error: switch `m' requires a value
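
Git prints this when the -m switch is given without the value it requires - here, the commit message. A minimal reproduction and correction:

$ git commit -m
error: switch `m' requires a value

$ git commit -m "your message"   # -m must be followed by a message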

 

Solution:

Typing git status produces the following:

On branch master
Your branch and 'origin/master' have diverged,
and have 1 and 2 different commits each, respectively.
  (use "git pull" to merge the remote branch into yours)

That is: merge the remote branch into your branch with git pull.

All conflicts fixed but you are still merging.
  (use "git commit" to conclude merge)

That is: conclude the in-progress merge with git commit.

 

Changes to be committed:

        modified:   index.html

Changes not staged for commit:

  (use "git add <file>..." to update what will be committed)

  That is: use git add <file>... to stage what will be committed.

  (use "git checkout -- <file>..." to discard changes in working directory)

  That is: use git checkout -- <file>... to discard the changes in the working directory.

        modified:   index.html

My fix: stage the resolved file with git add index.html

Conclude the merge: git commit -m "modify"

Pull the server's code down and merge it with your local code: git pull

Push the merged, up-to-date code back to the server: git push
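
Putting it together, the whole recovery is four commands (the file name and commit message are just this post's examples):

$ git add index.html        # stage the conflict-resolved file
$ git commit -m "modify"    # conclude the merge; note -m now has a value
$ git pull                  # merge origin/master into the local branch
$ git push                  # publish the merged result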

2019.7.11

https://github.com/iBotPeaches/Apktool

Introduction

First let's take a lesson in apk files. Apks are nothing more than zip files containing resources and compiled Java. If you were to simply unzip an apk like so, you would be left with files such as classes.dex and resources.arsc.

$ unzip testapp.apk
Archive: testapp.apk
  inflating: AndroidManifest.xml
  inflating: classes.dex
 extracting: res/drawable-hdpi/ic_launcher.png
  inflating: res/xml/literals.xml
  inflating: res/xml/references.xml
 extracting: resources.arsc

However, at this point you have simply inflated compiled sources. If you tried to view AndroidManifest.xml, you'd be left viewing this:

P4F0\fnversionCodeversionNameandroid*http://schemas.android.com/apk/res/androidpackageplatformBuildVersionCodeplatformBuildVersionNamemanifestbrut.apktool.testapp1.021APKTOOL

Obviously, editing or viewing a compiled file is next to impossible. That is where Apktool comes into play.

$ apktool d testapp.apk
I: Using Apktool 2.0.0 on testapp.apk
I: Loading resource table...
I: Decoding AndroidManifest.xml with resources...
I: Loading resource table from file: 1.apk
I: Regular manifest package...
I: Decoding file-resources...
I: Decoding values */* XMLs...
I: Baksmaling classes.dex...
I: Copying assets and libs...

Viewing AndroidManifest.xml again results in something much more human readable:

<?xml version="1.0" encoding="utf-8" standalone="no"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="brut.apktool.testapp" platformBuildVersionCode="21" platformBuildVersionName="APKTOOL"/>

In addition to XMLs, resources such as 9-patch images, layouts, strings and much more are correctly decoded to source form.

Decoding

The decode option on Apktool can be invoked with either d or decode, as shown below.

$ apktool d foo.jar          // decodes foo.jar to foo.jar.out folder
$ apktool decode foo.jar     // decodes foo.jar to foo.jar.out folder
$ apktool d bar.apk          // decodes bar.apk to bar folder
$ apktool decode bar.apk     // decodes bar.apk to bar folder
$ apktool d bar.apk -o baz   // decodes bar.apk to baz folder

Building

The build option can be invoked with either b or build, as shown below.

$ apktool b foo.jar.out      // builds foo.jar.out folder into foo.jar.out/dist/foo.jar file
$ apktool build foo.jar.out  // builds foo.jar.out folder into foo.jar.out/dist/foo.jar file
$ apktool b bar              // builds bar folder into bar/dist/bar.apk file
$ apktool b .                // builds current directory into ./dist
$ apktool b bar -o new_bar.apk  // builds bar folder into new_bar.apk
$ apktool b bar.apk          // WRONG: brut.androlib.AndrolibException: brut.directory.PathNotExist: apktool.yml
                             // Must use folder, not apk/jar file

Info: In order to run a rebuilt application, you must resign it. The Android documentation can help with this.
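
Signing is done with the standard Android SDK tooling, not with Apktool itself. A minimal sketch using zipalign and apksigner from the SDK build-tools (the keystore name, alias and file names below are placeholders):

$ keytool -genkeypair -v -keystore my.keystore -alias mykey -keyalg RSA -keysize 2048 -validity 10000   # one-time key creation
$ zipalign -v 4 bar/dist/bar.apk bar-aligned.apk   # align before signing with apksigner
$ apksigner sign --ks my.keystore --ks-key-alias mykey --out bar-signed.apk bar-aligned.apk
$ apksigner verify bar-signed.apk                  # sanity check before installing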
Frameworks

Frameworks can be installed with either if or install-framework. In addition, two parameters:

-p, --frame-path <dir> - Store framework files into <dir>
-t, --tag <tag> - Tag frameworks using <tag>

allow for finer control over how the files are named and how they are stored.

$ apktool if framework-res.apk
I: Framework installed to: 1.apk  // pkgId of framework-res.apk determines number (which is 0x01)
$ apktool if com.htc.resources.apk
I: Framework installed to: 2.apk  // pkgId of com.htc.resources is 0x02
$ apktool if com.htc.resources.apk -t htc
I: Framework installed to: 2-htc.apk  // pkgId-tag.apk
$ apktool if framework-res.apk -p foo/bar
I: Framework installed to: foo/bar/1.apk
$ apktool if framework-res.apk -t baz -p foo/bar
I: Framework installed to: foo/bar/1-baz.apk

Migration Instructions

v2.1.1 -> v2.2.0

Run the following commands to migrate your framework directory. Apktool will work fine without running these commands; this just cleans up abandoned files.

unix - mkdir -p ~/.local/share; mv ~/apktool ~/.local/share
windows - move %USERPROFILE%\apktool %USERPROFILE%\AppData\Local

v2.0.1 -> v2.0.2

Update apktool to v2.0.2
Remove framework file $HOME/apktool/framework/1.apk due to internal API update (Android Marshmallow)

v1.5.x -> v2.0.0

Java 1.7 is required
Update apktool to v2.0.0
aapt is now included inside the apktool binary. It's not required to maintain your own aapt install under $PATH. (However, features like -a / --aapt are still used and can override the internal aapt)
The addition of aapt replaces the need for separate aapt download packages. Helper scripts may be found here
Remove framework $HOME/apktool/framework/1.apk
Eagle-eyed users will notice resources are now decoded before sources. This is because we need to know the API version via the manifest for decoding the sources

Parameter Changes

Smali/baksmali 2.0 are included. This is a big change from 1.4.2. Please read the smali updates here for more information
-o / --output is now used for the output of apk/directory
-t / --tag is required for tagging framework files
-advance / --advanced will launch advanced parameters and information on the usage output
-m / --match-original is a new feature for apk analysis. This retains the apk in nearly original format, but will more than likely make rebuild not work, due to ignoring the changes that newer aapt requires

After [d]ecode, there will be new folders (original / unknown) in the decoded apk folder:

original = META-INF folder / AndroidManifest.xml, which are needed to retain the signature of apks to prevent needing to resign. Used with -c / --copy-original on [b]uild
unknown = Files / folders that are not part of the standard AOSP build procedure. These files will be injected back into the rebuilt APK.

apktool.yml collects more information than the last version:

SdkInfo - Used to repopulate the sdk information in AndroidManifest.xml, since newer aapt requires version information to be passed via parameter
packageInfo - Used to help support the Android 4.2 renamed manifest feature. Automatically detects differences between resource and manifest and performs an automatic --rename-manifest-package on [b]uild
versionInfo - Used to repopulate the version information in AndroidManifest.xml, since newer aapt requires version information to be passed via parameter
compressionType - Used to determine the compression that resources.arsc had in the original apk, in order to replicate it during [b]uild
unknownFiles - Used to record the name/location of non-standard files in an apk, in order to place them correctly in the rebuilt apk
sharedLibrary - Used to help support the Android 5 shared library feature by automatically detecting shared libraries and using --shared-lib on [b]uild

Examples of new usage in 2.0 vs 1.5.x:

Old (Apktool 1.5.x)                    New (Apktool 2.0.x)
apktool if framework-res.apk tag       apktool if framework-res.apk -t tag
apktool d framework-res.apk output     apktool d framework-res.apk -o output
apktool b output new.apk               apktool b output -o new.apk

v1.4.x -> v1.5.1

Update apktool to v1.5.1
Update aapt manually or use package r05-ibot via downloading Mac, Windows or Linux
Remove framework file $HOME/apktool/framework/1.apk

Intermediate Framework Files

As you probably know, Android apps utilize code and resources that are found on the Android OS itself. These are known as framework resources, and Apktool relies on these to properly decode and build apks.

Every Apktool release internally contains the most up-to-date AOSP framework at the time of the release. This allows you to decode and build most apks without a problem. However, manufacturers add their own framework files in addition to the regular AOSP ones. To use apktool against these manufacturer apks, you must first install the manufacturer framework files.

Example

Let's say you want to decode HtcContacts.apk from an HTC device. If you try, you will get an error message.

$ apktool d HtcContacts.apk
I: Loading resource table...
I: Decoding resources...
I: Loading resource table from file: 1.apk
W: Could not decode attr value, using undecoded value instead: ns=android, name=drawable
W: Could not decode attr value, using undecoded value instead: ns=android, name=icon
Can't find framework resources for package of id: 2. You must install proper framework files, see project website for more info.

We must get the HTC framework resources before decoding this apk. We pull com.htc.resources.apk from our device and install it:

$ apktool if com.htc.resources.apk
I: Framework installed to: 2.apk

Now we will try the decode again.

$ apktool d HtcContacts.apk
I: Loading resource table...
I: Decoding resources...
I: Loading resource table from file: /home/brutall/apktool/framework/1.apk
I: Loading resource table from file: /home/brutall/apktool/framework/2.apk
I: Copying assets and libs...

As you can see, Apktool leveraged both the 1.apk and 2.apk framework files in order to properly decode this application.

Finding Frameworks

For the most part, any apk in /system/framework on a device will be a framework file. On some devices they might reside in /data/system-framework, or even be cleverly hidden in /system/app or /system/priv-app. They are usually named with "resources", "res" or "framework" in the name. For example, HTC has a framework called com.htc.resources.apk, and LG has one called lge-res.apk.

After you find a framework file, you can pull it via adb pull /path/to/file or use a file manager application. After you have the file locally, pay attention to how Apktool installs it. The number that the framework is named with during install corresponds to the pkgId of the application. These values should range from 1 to 9; an apk that installs itself as 127 (0x7F) is using an internal pkgId.
Internal Frameworks

Apktool comes with an internal framework, as mentioned above. This file is copied to $HOME/apktool/framework/1.apk during use.

Warning: Apktool has no knowledge of what version of framework resides there. It will assume it's up to date, so delete the file during Apktool upgrades.

Managing framework files

Frameworks are stored in $HOME/apktool/framework for Windows and Unix systems. Mac OS X has a slightly different folder location of $HOME/Library/apktool/framework. If these directories are not available, it will default to java.io.tmpdir, which is usually /tmp. This is a volatile directory, so it would make sense to take advantage of the parameter --frame-path to select an alternative folder for framework files.

Note: Apktool has no control over the frameworks once installed, but you are free to manage these files on your own.

Tagging framework files

Frameworks are stored using the naming convention <id>-<tag>.apk. They are identified by pkgId and, optionally, a custom tag. Usually tagging frameworks isn't necessary, but if you work on apps from many different devices and they have incompatible frameworks, you will need some way to easily switch between them.

You could tag frameworks like so:

$ apktool if com.htc.resources.apk -t hero
I: Framework installed to: /home/brutall/apktool/framework/2-hero.apk
$ apktool if com.htc.resources.apk -t desire
I: Framework installed to: /home/brutall/apktool/framework/2-desire.apk

Then:

$ apktool d HtcContacts.apk -t hero
I: Loading resource table...
I: Decoding resources...
I: Loading resource table from file: /home/brutall/apktool/framework/1.apk
I: Loading resource table from file: /home/brutall/apktool/framework/2-hero.apk
I: Copying assets and libs...

$ apktool d HtcContacts.apk -t desire
I: Loading resource table...
I: Decoding resources...
I: Loading resource table from file: /home/brutall/apktool/framework/1.apk
I: Loading resource table from file: /home/brutall/apktool/framework/2-desire.apk
I: Copying assets and libs...

You don't have to select a tag when building the apk - apktool automatically uses the same tag as when decoding.

Smali Debugging

Warning: SmaliDebugging was marked as deprecated in 2.0.3 and removed in 2.1. Please check SmaliIdea for a debugger.

Apktool makes it possible to debug smali code step by step, watch variables, set breakpoints, etc.

General information

Generally, we need several things to run a Java debugging session:

debugger server (usually a Java VM)
debugger client (usually an IDE like IntelliJ, Eclipse or Netbeans)
the client must have sources of the debugged application
the server must have binaries compiled with debugging symbols referencing these sources
the sources must be java files with at least package and class definitions, to properly connect them with the debugging symbols

In our particular situation we have:

server: Monitor (previously DDMS), part of the Android SDK, standard for debugging Android applications - explained here
client: any JPDA client - most decent IDEs have support for this protocol
sources: smali code modified by apktool to satisfy the above requirements (".java" extension, class declaration, etc.). Apktool modifies them when decoding the apk in debug mode
binaries: when building the apk in debug mode, apktool removes the original symbols and adds new ones referencing the smali code (line numbers, registers/variables, etc.)

Info: To successfully run debug sessions, the apk must be both decoded and built in debug mode (-d / --debug). Decoding with debug decodes the application differently, to allow the debug rebuild option to inject lines that let the debugger identify variables and types.

General instructions

The above information is enough to debug smali code using apktool, but if you aren't familiar with DDMS and Java debugging, you probably still don't know how to do it. Below are simple instructions for doing it using IntelliJ or Netbeans.

Decode the apk in debug mode: $ apktool d -d -o out app.apk
Build a new apk in debug mode: $ apktool b -d out
Sign, install and run the new apk (a sketch of this step follows). Then follow the sub-instructions below depending on your IDE.
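
One way to do the sign-and-install step, assuming the standard debug keystore that the Android SDK generates (~/.android/debug.keystore, store password "android", alias "androiddebugkey"):

$ jarsigner -keystore ~/.android/debug.keystore -storepass android out/dist/app.apk androiddebugkey
$ adb install -r out/dist/app.apk    # -r replaces the app if it is already installed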
IntelliJ (Android Studio) instructions

In IntelliJ, add a new Java Module Project, selecting the "out" directory as the project location and the "smali" subdirectory as the content root dir.
Run Monitor (Android SDK /tools folder), find your application in the list and click it. Note the port information in the last column - it should be something like "86xx / 8700".
In IntelliJ: Debug -> Edit Configurations. Since this is a new project, you will have to create a Debugger. Create a Remote Debugger, with the settings on "Attach", and set the Port to 8700 (or whatever Monitor said). The rest of the fields should be OK; click "Ok".
Start the debugging session. You will see some info in a log, and debugging buttons will show up in the top panel.
Set a breakpoint. You must select a line with some instruction; you can't set a breakpoint on lines starting with ".", ":" or "#".
Trigger some action in the application. If you hit the breakpoint, the thread should stop and you will be able to debug step by step, watch variables, etc.

Netbeans instructions

In Netbeans, add a new Java Project with Existing Sources; select the "out" directory as the project root and the "smali" subdirectory as the sources dir.
Run DDMS, find your application in the list and click it. Note the port information in the last column - it should be something like "86xx / 8700".
In Netbeans: Debug -> Attach Debugger -> select JPDA and set Port to 8700 (or whatever you saw in the previous step). The rest of the fields should be OK; click "Ok".
The debugging session should start: you will see some info in a log, and debugging buttons will show up in the top panel.
Set a breakpoint. You must select a line with some instruction; you can't set a breakpoint on lines starting with ".", ":" or "#".
Trigger some action in the application. If you hit the breakpoint, the thread should stop and you will be able to debug step by step, watch variables, etc.

Limitations/Issues

Because the IDE doesn't have full sources, it doesn't know about class members and such. Variable watching works because most data can be read from memory (objects in Java know about their types), but if, for example, you watch an object and it has some nulled member, you won't see what type that member is.

9Patch Images

Docs exist for the mysterious 9-patch images here and there. (Read these first.) These docs, though, are meant for developers and lack information for those who work with already compiled 3rd-party applications. There you can find information on how to create them, but no information about how they actually work. I will try to explain that here.

The official docs miss one point: 9-patch images come in two forms, source and compiled.

source - You know this one. You find it in the source of an application or freely available online. These are images with a black border around them.
compiled - The mysterious form found in apk files. There are no borders, and the 9-patch data is written into a binary chunk called npTc. You can't see or modify it easily, but the Android OS can, as it's quicker to read.

There are problems related to these two forms. You can't move 9-patch images between the two types without a conversion. If you try to unpack 9-patch images from an apk and use them in the source of another, you will get errors during build. And vice versa: you cannot put source 9-patch images directly into an apk. Also, the 9-patch binary chunk isn't recognized by modern image processing tools, so modifying the compiled image will more than likely break the npTc chunk, thus breaking the image on the device.

The only solution to this problem is to convert between the two types. The encoder (which takes source to compiled) is built into the aapt tool and is automatically used during build. This means we only need a decoder, which has been in apktool since v1.3.0 and is automatically run on all 9-patch images during decode.

So if you want to modify 9-patch images, don't do it directly. Use apktool to decode the application (including the 9-patch images), then modify the images. When you build the application back, the source 9-patch images will be compiled.

Other FAQ

What about the -j switch shown in the original YouTube videos?
Read Issue 199. In short - it doesn't exist.

Is it possible to run apktool on a device?
Sadly not. There are some incompatibilities with SnakeYAML, java.nio and aapt.

Where can I download the sources of apktool?
From our Github or Bitbucket project.

The resulting apk file is much smaller than the original! Is there something missing?
There are a couple of reasons that might cause this. Apktool builds unsigned apks, which means the entire META-INF directory is missing. Newer versions of apktool also contain a newer aapt, which optimizes images differently. These points might contribute to a smaller than normal apk.

There is no META-INF dir in the resulting apk. Is this ok?
Yes. META-INF contains the apk signatures. After modifying the apk, it is no longer signed. You can use -c / --copy-original to retain these signatures. However, using -c uses the original AndroidManifest.xml file, so changes to it will be lost.

What do you call "magic apks"?
For some reason there are apks that are built using modified build tools. These apks don't work on a regular AOSP Android build, but are usually accompanied by a modified system that can read them. Apktool cannot handle these apks, therefore they are "magic".

Could I integrate apktool into my own tool? Could I modify apktool sources? Do I have to credit you?
Actually, the Apache License, which apktool uses, answers all these questions. Yes, you can redistribute and/or modify apktool without my permission. However, if you do, it would be nice to add our contributors (brut.all, iBotPeaches and JesusFreke) to your credits, but it's not required.

Where does apktool store its framework files?
unix - $HOME/.local/share/apktool
mac - $HOME/Library/apktool
windows - $HOME/AppData/Local/apktool

Options

Utility

Options that can be executed at any time:

-version, --version
Outputs current version. (Ex: 1.5.2)

-v, --verbose
Verbose output. Must be first parameter

-q, --quiet
Quiet output. Must be first parameter

-advance, --advanced
Advanced usage output
Decode

These are all the options when decoding an apk:

--api <API>
The numeric api-level of the smali files to generate (defaults to targetSdkVersion)

-b, --no-debug-info
Prevents baksmali from writing out debug info (.local, .param, .line, etc). Preferred if you are comparing smali from the same APK of different versions. The line numbers and debug info change among versions, which can make DIFF reports a pain.

-f, --force
Force delete destination directory. Use when trying to decode to a folder that already exists

--keep-broken-res - Advanced
If there is an error like "Invalid Config Flags Detected. Dropping Resources...", the APK has a different structure than Apktool can handle. This might be a newer Android version or a random APK that doesn't match standards. Running with this option will allow the decode, but then you have to manually fix the folders with -ERR in them.

-m, --match-original - Used for analysis
Matches files as closely as possible to the original, but prevents rebuild.

-o, --output <DIR>
The name of the folder that the apk gets written to

-p, --frame-path <DIR>
The folder location where framework files should be stored/read from

-r, --no-res
This will prevent the decompile of resources. It keeps resources.arsc intact without any decode. If you are only editing Java (smali), this is recommended for faster decompile & rebuild.

-s, --no-src
This will prevent the disassembly of the dex files. It keeps the apk's classes.dex file and simply moves it during build. If you are only editing the resources, this is recommended for faster decompile & rebuild.

-t, --frame-tag <TAG>
Uses framework files tagged via <TAG>

Rebuild

These are all the options when building an apk:

-a, --aapt <FILE>
Loads aapt from the specified file location, instead of relying on path. Falls back to $PATH loading if no file is found

-c, --copy-original - Will still require signature resign post API18
Copies original AndroidManifest.xml and META-INF folder into the built apk

-d, --debug
Adds debuggable="true" to the AndroidManifest file

-f, --force-all
Overwrites existing files during build, reassembling the resources.arsc file and classes.dex file

-o, --output <FILE>
The name and location of the apk that gets written

-p, --frame-path <DIR>
The location where framework files are loaded from
au3 decompilation source

myAut2Exe - The Open Source AutoIT Script Decompiler 2.9
========================================================

*New* full support for AutoIT v3.2.6++ :) ... mmh, here's what I merely missed in the 'public sources 3.1.0'.

This program is for studying the 'Compiled' AutoIt3 format. AutoHotKey was developed from AutoIT, so scripts are nearly the same. Drag the compiled *.exe or *.a3x into the AutoIT Script Decompiler textbox. To copy text or to enlarge the log window, double click on it.

Supported obfuscators:
'Jos van der Zande AutoIt3 Source Obfuscator v1.0.14 [June 16, 2007]',
'Jos van der Zande AutoIt3 Source Obfuscator v1.0.15 [July 1, 2007]',
'Jos van der Zande AutoIt3 Source Obfuscator v1.0.20 [Sept 8, 2007]',
'Jos van der Zande AutoIt3 Source Obfuscator v1.0.22 [Oct 18, 2007]',
'Jos van der Zande AutoIt3 Source Obfuscator v1.0.24 [Feb 15, 2008]',
'EncodeIt 2.0' and 'Chr() string encode'

Tested with: AutoIT v3.3.0.0, AutoIT v2.64.0.0 and AutoHotKey v1.0.48.5

The options:
============

'Force Old Script Type'
Grey means auto-detect, which is best in most cases. However, if auto-detection fails or is fooled by modification, try enabling/disabling this setting.

'Don't delete temp files (compressed script)'
This will keep *.pak files, which you may try to unpack manually with 'LZSS.exe', as well as *.tok DeTokeniser files, tidy backups and *.tbl files (<- used in van Zande obfuscation). If enabled, it will keep AHK scripts as they are and won't remove the linebreaks at the beginning. Default: OFF

'Verbose LogOutput'
When checked, you get verbose information when decompiling (DeTokenising) new 3.2.6+ compiled exes. Default: OFF

'Restore Includes'
Will separate/restore includes. Requires the ';<AUT2EXE INCLUDE-START' comment to be present in the script to work. Default: ON

'Use 'normal' Au3_Signature to find start of script'
Will use the normal 16-byte start signature to detect the start of a
EurekaLog 7.5 (18-August-2016)

1)....Important: Installation layout was changed. All packages now have a version suffix (e.g. EurekaLogCore240.bpl). No files are copied to the \bin folder of the IDE. The run-time package (EurekaLogCore) is copied to the Windows\System32 folder. Refer to help for more info.
2)....Added: RAD Studio 10.1 Berlin support
3)....Added: IDE F1 help integration (on CHM-based IDEs only, i.e. XE8+)
4)....Added: "--el_injectjcl", "--el_createjcl", and "--el_createdbg" command-line options for ecc32/emake to inject JEDI/JCL debug info, create a .jdbg file, and create a .dbg file (Microsoft debug format). The latter is supported when the map2dbg.exe tool is placed in the \Bin folder of the EurekaLog installation (separate download is required)
5)....Added: Exception2HRESULT in EAppDLL to simplify developing DLLs with the "DLL" profile
6)....Added: Use ShellExecute option for mailto send method
7)....Added: "Mandatory e-mail only when sending" option
8)....Added: Exception line highlighting in disassembler view in EurekaLog exception dialog and Viewer
9)....Added: Detection/logging of Delphi objects in disassembly view
10)..Added: Support for multi-monitor info
11)..Added: Support for detection of Windows 10 updates
12)..Added: OS edition detection
13)..Added: "User" and "Session" columns in the processes list; the processes list is also sorted by session first
14)..Added: Support for showing current user processes only
15)..Added: Expanding environment variables for "Support URL"
16)..Fixed: Range-check error on systems with MBCS ACP
17)..Fixed: 64-bit shared memory manager may not work
18)..Fixed: Possible "Unit XYZ was compiled with a different version of ABC" when using packages
19)..Fixed: FastMM shared MM compatibility
20)..Fixed: Minor bugs in stack tracing (which usually affected stacks for leaks)
21)..Fixed: Rare deadlocks in multi-threaded applications
22)..Fixed: Taking screenshot of minimized window
23)..Fixed: NT service may not log all exceptions
24)..Fixed: SSL port number for Bugzilla
25)..Fixed: Disabling "Activate Exception Filters" option was ignored
26)..Fixed: Missing FTP proxy settings
27)..Fixed: IntraWeb support is updated up to 14.0.64
28)..Fixed: Retrieving some process paths in processes list
29)..Fixed: CPU view rendering in EurekaLog exception dialog and Viewer
30)..Fixed: Some issues in naming threads
31)..Fixed: Removed exported helper _462EE689226340EAA982C5E8307B3F9E function (replaced with mapped file)
32)..Changed: Descriptions of EurekaLog project options now list corresponding property names of the TEurekaModuleOptions class
33)..Changed: Default template of HTML/web dialog now includes call stack by default
34)..Changed: EurekaLog 7 can now be installed over EurekaLog 6 automatically, with no additional actions/tools

EurekaLog 7.4 (7.4.0.0), 26-January-2016

1)....Fixed: Performance issue in DLL exports debug information provider
2)....Fixed: Range-check error in Send dialog
3)....Fixed: Possible FPU control word unexpected change
4)....Fixed: JIRA sending to project with no version info
5)....Fixed: Viewer sorting affected by local region settings
6)....Fixed: Exception filters ignore settings for restart/terminate

EurekaLog 7.3 Hotfix 2 (7.3.2.0), 20-October-2015

1)....Fixed: Added workaround for codegen bug in Delphi 7 (possibly others); the bug manifests itself as wrong date-time in reports or integer overflows
2)....Fixed: Some MAPI DLLs may not be loaded correctly
3)....Fixed: Handling SEC_I_INCOMPLETE_CREDENTIALS in SSPI code (added searching client certificate)
4)....Fixed: Range-check error when closing WinAPI dialog

EurekaLog 7.3 Hotfix 1 (7.3.1.0), 2-October-2015

1)....Fixed: Long startup time on terminal services servers

EurekaLog 7.3 (7.3.0.0), 24-September-2015

1)....Added: RAD Studio 10 Seattle support
2)....Added: Performance counters for run-time (internal logging with --el_debug)
3)....Fixed: Processes spawned by ecc32/emake now start with the same priority
4)....Fixed: ThreadID = 0 in StandardEurekaNotify
5)....Fixed: Dialog auto-close timer may reset without user input
6)....Fixed: Possible hang when quickly loading/unloading EurekaLog-enabled DLL
7)....Fixed: Possible hang in COM DLLs
8)....Fixed: Removed some unnecessary file system access on startup
9)....Fixed: Possible wrong font size in EurekaLog tools
10)..Fixed: Ignore timeouts from Shell_NotifyIcon
11)..Fixed: Possible failure to handle/process stack overflow exceptions
12)..Changed: VCL/CLX/FMX will now assign the Application.OnException handler when low-level hooks are disabled

EurekaLog 7.2 Hotfix 6 (7.2.6.0), 14-July-2015

1)....Added: csoCaptureDelphiExceptions option
2)....Fixed: Handling of SECBUFFER_EXTRA in SSPI code
3)....Fixed: Several crashes in sending code for very old Delphi versions
4)....Fixed: Regression (from hotfix 5) crash in some IDEs

EurekaLog 7.2 Hotfix 5 (7.2.5.0), 1-July-2015

1)....Added: HKCU\Software\EurekaLab\Viewer\4.0\UI\Statuses registry key to allow status customizations in Viewer
2)....Added: "Disable hang detection under debugger" option
3)....Fixed: Wrong button caption in standalone "Steps to reproduce" dialog
4)....Fixed: Wrong passing of Boolean parameters in JSON (affects JIRA)
5)....Fixed: Wrong sorting of BugID, Count and DateTime columns in Viewer
6)....Fixed: Empty "Count" field/column is now displayed as "1" in Viewer
7)....Fixed: Generic names with "," could not be decoded in Viewer
8)....Fixed: Updated Windows 10 detection for latest builds of Windows 10
9)....Fixed: Sleep and hibernation no longer trigger a false-positive "application freeze"
10)..Fixed: Wrong function codes for hooking (affects ISAPI application type)
11)..Fixed: Wrong button caption in "Steps to Reproduce" dialog
12)..Fixed: Crash when taking snapshots of some processes with the Threads Snapshot tool
13)..Fixed: Minor improvements in leak detection

EurekaLog 7.2 Hotfix 4 (7.2.4.0), 10-June-2015

1)....Added: "ECC32TradeSpeedForMemory" option - defaults to 0/False, can be changed to 1 via the Custom/Manual tab. This option switches from fast methods to slower methods which take less memory. Use 0 (default) for small projects; use 1 for large projects (if ecc32 runs out of memory).
2)....Added: --el_DisableDebuggerPresent command-line option for compatibility with 3rd party debuggers (AQTime, etc.)
3)....Added: AQTime auto-detect
4)....Fixed: Performance optimizations
5)....Fixed: Windows 8+ App Menu shortcuts
6)....Fixed: Unmangling on x64

EurekaLog 7.2 Hotfix 3 (7.2.3.0), 20-May-2015

1)....Added: Support for token auth in Bugzilla (latest 4.x builds)
2)....Added: Support for API key auth in Bugzilla (5.x)
3)....Added: Support for /EL_DisableMemoryFilter command-line option
4)....Added: Asking for e-mail when user switches to "details" from MS Classic without entering e-mail
5)....Fixed: Compatibility issues with older Bugzilla versions (3.x)
6)....Fixed: Passing settings between dialogs
7)....Fixed: "Ask for steps to reproduce" dialog is now DPI-aware
8)....Fixed: Silently ignore and fix invalid values in project options

EurekaLog 7.2 Hotfix 2 (7.2.2.0), 30-April-2015

1)....Fixed: Confusing message in Manage tool when used with Trial/Pro
2)....Fixed: Range check error in processes information for x64 machines (affects startup of any EurekaLog-enabled module)
3)....Fixed: Auto-detect personality by project extension if --el_mode switch is missing
4)....Fixed: More details for diagnostic sending
5)....Fixed: Wrong settings for MAP files in C++ Builder
6)....Fixed: Wrong code page was used to decode ANSI bug reports
7)....Fixed: Attaching .PAS files instead of .OBJ in C++ Builder 2006+ Pro/Trial

EurekaLog 7.2 Hotfix 1 (7.2.1.0), 3-April-2015

1)....Fixed: Wrong float-to-string conversion when ThousandSeparator is '.'

EurekaLog 7.2 (7.2.0.0), 1-April-2015

1)....Important: TEurekaLogV7 component was renamed to TEurekaLogEvents. Please update your projects by renaming or recreating the component
2)....Important: File layout was changed for BDS 2006+. Delphi and C++ Builder files are now located in StudioNum folders instead of the old DelphiNum and CBuilderNum folders. Update your search paths if needed
3)....Added: Major improvements in DumpAllocationsToFile function (EMemLeaks unit)
4)....Added: MemLeaksSetParentBlock, MemLeaksOwn, EurekaTryGetMem functions (EMemLeaks unit)
5)....Added: Improvements for call stack of dynarrays/strings allocations (leaks)
6)....Added: "Elem size" when reporting leaks in dynarrays
7)....Added: Streaming unpacked debug info into temporary files instead of memory - this greatly reduces run-time application memory usage at the cost of slightly slower exception processing. This also reduces the memory footprint for ecc32/emake
8)....Added: Showing call stacks for 2 new types of fatal memory errors
9)....Added: EMemLeaks._ReserveOutOfMemory to control reserve size for out of memory errors (default is 50 Mb)
10)..Added: "MinLeaksLimitObjs" option (EMemLeaks unit)
11)..Added: Fatal memory problem now pauses all threads in application
12)..Added: Fatal memory problem now changes thread name (to simplify debugging)
13)..Added: boPauseELThreads and boDoNotPauseELServiceThread options (currently not visible in UI)
14)..Added: Support for text collections out of default path
15)..Added: Support for relative file paths to text collections and external settings
16)..Added: Support for environment variables in project option's paths
17)..Added: Support for relative file paths and environment variables for events and various module paths
18)..Added: Logging in Manage tool
19)..Added: Windows 10 version detection
20)..Added: Stack overflow tracing
21)..Added: Major improvements in removal of recursive areas from call stack
22)..Added: Statistics collection
23)..Added: Support for uploading multiple files in JIRA
24)..Added: EResLeaks improvements (new funcs: ResourceAdd, ResourceDelete, ResourceName; support for realloc-like functions)
25)..Fixed: Added workaround for bug in JIRA 5.x
26)..Fixed: Rare EurekaLog internal error
27)..Fixed: Ignored unhandled thread exceptions (when EurekaLog is disabled) now trigger default OS processing (WER)
28)..Fixed: Ignored exceptions (via per-exception/events) now bring up the default RTL handler
29)..Fixed: Format error in Viewer
30)..Fixed: Leak of EurekaLog exception information object
31)..Fixed: Wrong chaining of exceptions inside GetMem/FreeMem
32)..Fixed: Memory leak after low-level unhook of function
33)..Fixed: Re-parenting after ReallocMem
34)..Fixed: Editing SMTP server options
35)..Fixed: SMTP server not using real user e-mail in FROM field
36)..Fixed: Some multi-threading crashes
37)..Fixed: Crashes in Manage tool
38)..Fixed: Range-check error in Viewer
39)..Fixed: EurekaLog error dialog appearing under other windows
40)..Fixed: AV when parsing TDS (emake/C++ Builder specific)
41)..Fixed: Unable to build call stacks for other threads due to insufficient rights
42)..Fixed: Version checks for BugZilla and JIRA
43)..Fixed: Not catching out-of-module AVs when "Capture exceptions only from current module" option is checked
44)..Fixed: Checking for remaining exceptions at shutdown (C++ Builder specific, AcquireExceptionObject returns wrong info)
45)..Fixed: "Get call stack of ... threads" / "suspend ... threads" options (avoid rare multithreading race conditions)
threads" options (avoid rare multithreading race conditions) 46)..Fixed: Crash when naming thread without EurekaLog thread info 47)..Fixed: Detection of immediate caller for memory funcs 48)..Fixed: Non-working Assign for options 49)..Fixed: Handling of explicitly chained exceptions 50)..Fixed: Various exception/threading fixes for MS debug provider 51)..Fixed: Processing hardware unhandled exceptions (QC #55007) 52)..Fixed: Unchecking dialog options when export/import 53)..Fixed: BSTR leak 54)..Fixed: JIRA decimal separator bug 55)..Changed: Now unhandled exceptions will be handled by EurekaLog even if EurekaLog is disabled in the thread - only global EurekaLog-enabled status is respected 56)..Changed: Viewer version now matches version of EurekaLog 57)..Changed: DeleteServiceFilesOption now always False by default 58)..Changed: Speed improvements for known memory leaks (reserved leaks) 59)..Changed: Improved logging for sending 60)..Changed: Switching to detailed mode without entering (mandatory) e-mail: now EL will not block this 61)..Changed: .ToString for exception info now uses compact stack formatter 62)..Removed: Custom field editor (replaced it with link to "Custom" page) 63)..Removed: EurekaLog 7 no longer could be installed over EurekaLog 6. Manage tool from EurekaLog 7 will no longer work with EurekaLog 6. EurekaLog 7.1 update 1 (7.1.1.0), 19-October-2014 1)....Added: "Send in separated thread" option 2)....Added: Hang detection will now use Wait Chain Traversal (WCT) on Vista+ systems to detect deadlocks in any EurekaLog-enabled threads 3)....Added: OS install language and UI language fields in bug report 4)....Fixed: Viewer is not able to decrypt reports with generics 5)....Fixed: EVariantTypeCastError in Viewer when changing status of some bug reports 6)....Fixed: EcxInvalidDataControllerOperation in Viewer 7)....Fixed: Stack overflow at run-time for certain combination of project options 8)....Fixed: BMP re-draw bug in UI dialogs 9)....Fixed: Rogue "corrupted" error message for valid ZIPs of certain structure 10)..Fixed: Various range check errors in Viewer 11)..Fixed: Possible encoding errors for non-ASCII reports in Viewer on certain environments 12)..Fixed: Wrong count in Viewer when importing reports without proper "count" field 13)..Fixed: Duplicate reports may appear in bug report file when "Do not save duplicate errors" option is checked 14)..Fixed: False-positive detection of some virtual machines 15)..Fixed: Processing of exceptions from message handlers during message pumping cycle inside exception dialogs 16)..Fixed: Access Violation if exception dialog was terminated by exception 17)..Fixed: Hardware exceptions from unit's initialization/finalization may be unprocessed 18)..Changed: "VIEW" action for Viewer now will open ALL bug reports inside bug report file; reports will not be merged by BugID. 
"IMPORT" action remains the same: duplicate reports are merged, "count" is increased 19)..Changed: Charset field in bug report now shows both charset and code page EurekaLog 7.1 (7.1.0.00), 23-September-2014 1)....Added: XE7 support 2)....Added: XE6 support 3)....Added: New DLL demo 4)....Added: Custom profiles are now shown in "Application type" combo-box 5)....Added: Non-empty "steps to reproduce" will be added to existing bug tracker issues with empty "steps to reproduce" 6)....Added: Support for custom fields in FogBugz (API version 8 and above) 7)....Added: Support for unsequenced line numbers in PDB/DBG files (--el_source switch) 8)....Fixed: XML bug report were generated wrong 9)....Fixed: Strip relocations code for Win64 10)..Fixed: EurekaLog conditional symbols removed improperly when deactivating EurekaLog 11)..Fixed: Sending reports to non-default port numbers (affects web-based methods) 12)..Fixed: SSL validation check may reject valid SSL certificate (SMTP Client/Server) 13)..Fixed: SSL errors may be not reported 14)..Fixed: Viewer did not consider empty bug reports as corrupted 15)..Fixed: "DLL" profile now can be used with packages properly 16)..Fixed: Few rare memory leaks 17)..Fixed: Possible deadlock when using MS debug info provider 18)..Fixed: C++ Builder project files was saved incorrectly (RAD Studio 2007+) 19)..Fixed: "Show restart checkbox after N errors" counts handled exceptions 20)..Fixed: IDE expert's DPR parser (added support for multi-part idents) 21)..Fixed: Rare access violation in hook code 22)..Fixed: Thread handle leaks (added _NotifyThreadGone/_CleanupFinishedThreads functions to be called manually - only when low-level hooks are not installed) 23)..Fixed: EurekaLog's installer hang 24)..Fixed: Bug in object/class validation 25)..Fixed: Bug when using TThreadEx without EurekaLog 26)..Fixed: Leaks detection may not work with certain combination of options 27)..Fixed: Deadlock in some cases when using EurekaLog threading option set to "enabled in RTL threads, disabled in Windows threads". 
28)..Changed: TEurekaExceptionInfo.CallStack will be nil until the exception is actually raised
29)..Changed: FogBugz and BugZilla: changed bug identification within project (to allow two bugs to exist with the same BugID in different projects)
30)..Changed: Blocked manual creation/destruction of ExceptionManager class and EurekaExceptionInfo
31)..Changed: ECC32/EMAKE runs from IDE without changing priority; added ECC32PriorityClass option
32)..Improved: Minor help and text improvements

EurekaLog 7.0.07 Hotfix 2 (7.0.7.2), 11-December-2013

1)....Fixed: Delphi compiler code generation bug (Delphi 2007 and below)
2)....Fixed: Code hooks may rarely be set incorrectly (code stub relocation fails)
3)....Fixed: Win64 call stack functions now work more similarly to 32-bit call stacks

EurekaLog 7.0.07 Hotfix 1 (7.0.7.1), 2-December-2013

1)....Added: Alternative caption for e-mail input control when e-mail is mandatory
2)....Fixed: Rare range check error in WinAPI visual dialogs
3)....Fixed: Wrong error detection for OnExceptionError event
4)....Fixed: Wrong TResponce processing
5)....Fixed: Problems with encrypted call stack decoding
6)....Fixed: OnPasswordRequest event may have no effect

EurekaLog 7.0.07 (7.0.7.0), 25-November-2013

1)....Added: Ability to use Assign between call stack and TStrings
2)....Added: 64-bit disassembler
3)....Added: Support for variables and relative file paths in "Additional Files" send option
4)....Added: --el_source switch for ecc32/emake compilers
5)....Added: Support for post-processing non-Embarcadero executables
6)....Added: EOTL.pas unit for better OmniThreadLibrary integration
7)....Added: RAD Studio XE5 support
8)....Added: New "Capture call stacks of EurekaLog-enabled threads" option
9)....Added: "Deferred call stacks" option for 64-bit
10)..Added: Copy report to clipboard now copies both report text and report file
11)..Added: "AttachBothXMLAndELReports" option to include both .elx and .el files into bug report
12)..Added: EMemLeaks.MemLeaksErrorsToIgnore option to exclude certain memory errors from being considered as fatal
13)..Added: A call stack with any encrypted entry will now be fully encrypted
14)..Added: Option to exclude certain memory errors from being considered as fatal (EMemLeaks.MemLeaksErrorsToIgnore)
15)..Added: New "HTTP Error Code" option for all web-based dialogs (CGI, ISAPI, etc.)
16)..Added: Support for Unicode in Simple MAPI send method (requires Windows 8 or latest Microsoft Office)
17)..Added: New value for call stack detalization option (show any addresses, including those not belonging to any executable module)
18)..Fixed: Wrong JSON escaping for strings (affects JIRA send method)
19)..Fixed: Range-check error in Viewer when viewing bug reports with high addresses
20)..Fixed: Selecting Win32 service application type no longer resets to custom/unsupported
21)..Fixed: Possible hang when testing dialogs from EurekaLog project options dialog
22)..Fixed: Rare resetting of some options when saving .eof file
23)..Fixed: Exception pointer could be removed from call stack due to debug details filtering
24)..Fixed: Rare case when LastThreadException returned nil while there was an active thread exception
25)..Fixed: Rare case when ShowLastThreadException does nothing
26)..Fixed: Improved compatibility for OmniThreadLibrary and AsyncCalls
27)..Fixed: Included fix for QC #72147
28)..Fixed: 64-bit MS Debug Info Provider (please re-setup cache options using the configuration dialog)
29)..Fixed: "Deferred call stacks" option failed to capture call stack when exception is re-raised between threads
30)..Fixed: "Deferred call stacks" option may produce cut-off call stacks in rare cases
31)..Fixed: Several minor call stack improvements and optimizations
32)..Fixed: Several 64-bit Pointer <-> Integer conversion issues
33)..Fixed: Multi-threading deadlock issue
34)..Fixed: Black screenshots in 64-bit applications
35)..Fixed: Copy-to-clipboard hot-key was registered globally
36)..Fixed: Shell (mailto) send method may fail (64-bit)
37)..Fixed: Possible wrong file paths for attaches in (S)MAPI send methods
38)..Fixed: Environment variables were not expanded in MAPI send method
39)..Fixed: (non-Unicode IDE) EurekaLog is not activated when application is started from a folder with Unicode characters
40)..Fixed: Encrypted call stacks may be encrypted partially by EurekaLog Viewer in rare cases
41)..Fixed: Crash when sending leak report with visual progress dialog (only some IDEs are affected)
42)..Fixed: ecc32/emake could not see an external configuration file with the same name as the project (e.g. Project1.eof for Project1.dpr)
43)..Fixed: Added missing RTL implementation for ExternalProps in Delphi 6 (affects Mantis sending)
44)..Fixed: IDE crash when switching to threads window
45)..Changed: Removed the temporary solution that was used before the option to defer call stack creation was introduced
46)..Changed: "Default EurekaLog state in new threads" option was changed from a Boolean flag into an enum. You need to re-setup this option
47)..Changed: Disable EurekaLog for thread when creating call stack or handling exception - this increases stability and performance
48)..Changed: LastException property was removed from exception manager as not thread safe. Use the LastThreadException property instead
49)..Changed: Lock/Unlock from thread manager and exception manager were removed to avoid deadlocks
50)..Changed: ThreadsSnapshot tool now tries to capture call stack without injecting a DLL
51)..Changed: Build events now run with the CREATE_NO_WINDOW flag (console window is hidden)
52)..Improved: More articles in help

EurekaLog 7.0.06 (7.0.6.0), 1-June-2013

1)....Added: Experimental 64-bit C++ Builder support
2)....Added: New tab in EurekaLog project options: "External tools"
3)....Added: Option to catch all IDE errors (to debug your own IDE packages)
4)....Added: Option to catch only exceptions from current module
5)....Added: Option to defer building call stack
6)....Added: RAD Studio XE4 support
7)....Added: Support for AppWave
8)....Fixed: Event handler declarations for the EurekaLog component
9)....Fixed: Infinite recursive calls when using ToString from EndReport event handler
10)..Fixed: UPX compatibility issue
11)..Fixed: Range check errors for system error codes
12)..Fixed: Rare IDE stack overflow
13)..Fixed: JIRA unit was not added automatically
14)..Fixed: EurekaLog no longer tries to check for leaks when memory manager filter is disabled
15)..Fixed: Possible deadlock on shutdown with freeze checks active
16)..Fixed: Issues with settings dialog and Win32 Service application type
17)..Fixed: ThreadSnapshot tool was not able to take snapshots of Win64 processes
18)..Fixed: WCT is disabled for leaks
19)..Fixed: TContext declarations for Win64
20)..Fixed: Check for updates now correctly sets time of last check
21)..Fixed: (Win64) Several Pointer <-> Integer conversion errors
22)..Fixed: Internal error when exception info object was deleted while still used by a SysUtils exception object
23)..Fixed: Several problems with "EurekaLog look & feel" style for EurekaLog error dialog
24)..Fixed: Using text collection resets exception filters
25)..Fixed: Rare access violation if registering event handlers is placed too early
26)..Fixed: SMTP RFC date formatting
27)..Fixed: Rare empty call stack bug
28)..Fixed: Hang detection was not working if EurekaLog was disabled in threads
29)..Fixed: AV for double-free TEncoding
30)..Changed: ecc32/emake no longer alters arguments for dcc32/make unless the new option --el_add_default_options is specified
31)..Changed: Save/load options methods were moved to the TEurekaModuleOptions class
32)..Changed: Saving options to EOF file now adds hidden options and removes obsolete options (only when compatibility mode is off)
33)..Changed: Compiling installed packages now silently ignores EurekaLog instead of showing a "File is in use" error message
34)..Improved: More readable disk/memory sizes in bug reports
35)..Improved: More descriptive settings dialog when using external configuration
36)..Improved: ThreadSnapshot tool now acquires the DEBUG privilege for taking snapshots. This allows it to bypass security access checks when opening the target process
37)..Improved: Changed BugID default generation to include error code for OS errors and error message for DB errors
38)..Improved: Mantis API (WSDL) was updated to the latest version (1.2.14)
39)..Improved: IntraWeb compatibility (old and new versions)
40)..Improved: COM applications compatibility
41)..Improved: Build events now accept shell commands
42)..Improved: More articles in help

EurekaLog 7.0.05 (7.0.5.0), 7-February-2013

1)....Added: JIRA support
2)....Added: Virtual machine detection (new field in bug reports)
3)....Fixed: "Use Main Module options" option was loading empty options in some cases
4)....Fixed: Wrong record declarations for Simple MAPI on Win64
5)....Fixed: Performance issues with batch module options updating
6)....Fixed: Wrong leaks report with both MemLeaks/ResLeaks options active
7)....Fixed: Wrong info for nested exceptions in some cases
8)....Fixed: AV under debugger for Win64 (added support for _TExitDllException)
9)....Fixed: Wrong record declarations for process/thread info on Win64
10)..Fixed: Support for FinalBuilder on XE2/XE3 with spaces in file paths
11)..Fixed: Rare double-free of module information (ModuleInfoList)
12)..Fixed: Rare External Exception C000071C on shutdown (only under debugger)
13)..Fixed: Added large addresses support in Viewer
14)..Fixed: Counter options in memory leaks category now work properly
15)..Fixed: Rare range-check error in TEurekaModulesList.AddModuleFromFileName
16)..Fixed: FTP force directories deadlock
17)..Fixed: Wrong index being used when clearing compatibility mode (EurekaLog project options dialog)
18)..Fixed: Default thread state no longer affects main thread
19)..Fixed: Sometimes the wrong thread may be used when altering EurekaLog active state for an external thread
20)..Fixed: Wrong DNS lookup on ANSI
21)..Fixed: Problems with IDE expert and projects on network paths
22)..Fixed: Added support for arguments in URLs (HTTP sending)
23)..Fixed: Possible deadlock in multithreaded applications
24)..Fixed: Problems with Unicode characters in project files on non-Unicode IDEs
25)..Fixed: Infinite recursive calls when using ToString from EndReport event handler
26)..Fixed: Win64 GetCaller now returns a pointer to the call instruction, not the return address
27)..Improved: Standalone Editor no longer forces save/load folder by default
28)..Improved: DLL profile can now use additional application type hooks automatically
29)..Improved: EurekaLog is now able to work with read-only projects (see help for more info)

EurekaLog 7.0.04 (7.0.4.0), 2-December-2012

1)....Added: Support for nested exceptions in DLLs
2)....Fixed: Options bug in EurekaLogSendEmail function
3)....Fixed: Weird behaviour for steps to reproduce and custom fields
4)....Fixed: Installation for single personality (BDS)
5)....Fixed: Range check error in EModules
6)....Fixed: Bug in exception destroy hook
7)....Fixed: OnExceptionNotify event is no longer called for handled exceptions without the option checked
8)....Fixed: DEP checks on startup no longer cause exception
9)....Fixed: Invalid declaration for MS Debug API
10)..Fixed: OLE mode change error for "Test" send button
11)..Fixed: Fixes for multiple loading of the same DLL
12)..Fixed: Removed PNG compression from icons (tools)
13)..Fixed: Range-check error in dialogs with EurekaLog style enabled
14)..Fixed: Send progress dialog may keep busy forever processing window messages (message flood from rapid application GUI updates)
15)..Fixed: Thread pausing options now work correctly
16)..Improved: New features in exception filters - marking exceptions as "expected", filtering by properties (RTTI)
17)..Improved: Recovery from memory errors without debugging memory manager
18)..Improved: Viewer's password edit now hides password with asterisks
19)..Updated: Changed names of .inc files to avoid name conflicts with other libraries
20)..Updated: Help

EurekaLog 7.0.03 (7.0.3.0), 6-October-2012

1)....Fixed: Removed some const keywords for event handlers, so now C++ Builder can alter arguments (this change may require you to adjust your custom code)
2)....Fixed: Fallback code for false-positive results on memory probing
3)....Fixed: Range check errors in SSL/TLS implementation
4)....Fixed: "EurekaLog is not active" error message during send testing
5)....Fixed: Incorrect memory probing when DEP is off (old systems)
6)....Fixed: Installation of 64-bit BPLs
7)....Fixed: Dialog preview
8)....Fixed: Win64 fixes for XE3
9)....Fixed: Support for project groups (mixed project types)
10)..Fixed: Windows 2000 hooks compatibility
11)..Fixed: mailto double quotes escaping
12)..Fixed: Simple MAPI WOW compatibility
13)..Fixed: Simple MAPI modal issues
14)..Fixed: Various range check errors
15)..Changed: Removed minor version number from program group name
16)..Updated: Help

EurekaLog 7.0.02 hot-fix 1 (7.0.2.1), 12-September-2012

1)....Fixed: Range check error in Viewer
2)....Fixed: Bug in hooking code

EurekaLog 7.0.02 (7.0.2.0), 11-September-2012

1)....Added: Improved memory problems detection
2)....Added: Minor IDE Expert usability improvements
3)....Added: Auto-size feature for detailed error dialog
4)....Added: Workaround for QC #106935
5)....Added: Workaround for bug in InvokeRegistry (SOAP/Mantis)
6)....Fixed: Nested OS exceptions
7)....Fixed: Multiple Win64 fixes
8)....Fixed: Compatibility mode fixes
9)....Fixed: Altered behaviour of "Add BugID/Date/ComputerName" options
10)..Fixed: Blank screenshots
11)..Fixed: Check file for corruption
12)..Fixed: Viewer is unable to decrypt certain bug reports
13)..Fixed: Internal DoNoTouch option now works for post-processing and conditionals
14)..Fixed: Possible out of memory error for "Do not store class/procedure names" option
15)..Fixed: EurekaLog did not properly install itself when only Delphi is installed but no C++ Builder of the same version (or vice versa)
16)..Fixed: Wrong argument for OnRaise event
17)..Fixed: Handling memory errors in initialization/finalization sections
18)..Fixed: Updating steps to reproduce and user e-mail in bug report
19)..Fixed: Proper Success/Failure for some errors during SMTP send
20)..Added: Workaround for wrong GUI fonts
21)..Added: Delphi XE3 support
22)..Added: Individual options for each exception

EurekaLog 7.0.01 (7.0.1.0), 28-June-2012

1)....Added: New "Modal window" option (MS Classic and EurekaLog dialogs)
2)....Added: New "Owned window" option (MS Classic and EurekaLog dialogs)
3)....Added: New "Catch EurekaLog IDE Expert errors" option
4)....Added: Backup memory manager to recover from critical errors
5)....Added: Alternative methods to provide additional features when memory filter is not set
6)....Fixed: Contains fixes from hotfixes 1-3
7)....Fixed: Performance improvements
8)....Fixed: Improved IDE Expert's speed, stability and compatibility with other 3rd party extensions
9)....Fixed: MS Classic dialog size adjustments for large "click here" translations
10)..Fixed: Resetting of a few EurekaLog project options to defaults
11)..Fixed: Multiplying exception filters when options are assigned (for example: when switching to/from the "Custom" page in project options)
12)..Fixed: (Compatibility mode) Send options merging
13)..Fixed: Updated help

EurekaLog 7.0 hot-fix 3 (7.0.0.273), 20-June-2012

1)....Fixed: ERangeError in EResLeaks (THandle <-> Integer)
2)....Fixed: C++ Builder breakpoints for large projects
3)....Fixed: Help (updates policy changed)
4)....Fixed: Text collections applying
5)....Fixed: Build events are now called for unlocked file
6)....Fixed: Proper handling of C++ Builder project options files from Delphi code (settings editor and IDE expert)
7)....Fixed: Terminate/Checked sub-option for MS Classic dialog
8)....Fixed: Confusing message for already post-processed executables
9)....Fixed: Access violation for some EurekaLog IDE menu items when no project was loaded
10)..Fixed: Invoking help for "Variables" window
11)..Fixed: EurekaLog Viewer version info
12)..Fixed: Events in components
13)..Added: Retry option for "Sorry, you must close all running IDE instances before installation"
14)..Added: Italian translation
15)..Added: Actual change log is now included in the installer
16)..Added: Even more setup logging
17)..Added: New help articles (recompilation and manual installation)

EurekaLog 7.0 hot-fix 2 (7.0.0.261), 10-June-2012

1)....Fixed: Wrong version info reporting to IDE
2)....Added: Workaround for Delphi 2005 TListView bug
3)....Added: Workaround for possible invalid FPU state in exception handlers
4)....Added: Missing declarations for ExceptionLog (compatibility mode)
5)....Fixed: Work for unsaved projects
6)....Added: Escaping for '--' in options (confuses IDE's XML parsing)
7)....Added: Storing thread's class/name in call stack for terminated threads
8)....Added: More setup logging
9)....Fixed: Help (broken links)
10)..Added: "Upgrade to EurekaLog 7" help topic
11)..Fixed: Clean up installed files

EurekaLog 7.0 hot-fix 1 (7.0.0.256), 6-June-2012

1)....Fixed: Invalid Format() arguments in ELogBuilder

EurekaLog 7.0, 1-June-2012

1)....Improved: Main change - EurekaLog's core was rewritten (refactored) to allow easier modification and remove hacks.
2)....Improved: New plugin-like architecture now allows you to exclude unused code.
3)....Improved: New plugin-like architecture now allows you to easily extend EurekaLog.
4)....Improved: Greatly extended documentation.
5)....Improved: Installer is now localized.
6)....Improved: Greatly speeds up creation of minimal bug reports (with most information disabled).
7)....Changed: EurekaLog's root IDE menu was relocated under Tools and extended with new items.
8)....Added: New examples.
9)....Added: New tools (address lookup, error lookup, threads snapshot, standalone settings editor).
10)..Added: Support for DBG/PDB formats of debug information (including symbol server support and auto-downloading).
11)..Added: Support for madExcept debug information (experimental).
12)..Added: WER (Windows Error Reporting) support.
13)..Added: Full Unicode support.
14)..Added: Professional and Trial editions: added source code (interface sections only)
15)..Improved: Dialogs - new options and new customization possibilities:
16)..Added: All GUI dialogs: ability to test a dialog directly from the configuration dialog by displaying a sample window with the currently specified settings.
17)..Improved: All GUI dialogs: dialogs are DPI-aware now (auto-scale for different DPI).
18)..Added: MessageBox dialog: added detailed mode (shows a compact call stack). 19)..Added: MessageBox dialog: added ability for asking a send consent. 20)..Added: MessageBox dialog: added support to switch to "native" message box for application. 21)..Added: MS Classic dialog: added control over "user e-mail" edit's visibility. 22)..Added: MS Classic dialog: added ability to personalize dialog view with application's name and icon. 23)..Added: MS Classic dialog: added ability to show terminate/restart checkbox initially checked. 24)..Added: EurekaLog dialog: added ability to personalize dialog view with application's name and icon. 25)..Added: EurekaLog dialog: added ability to show terminate/restart checkbox initially checked. 26)..Added: EurekaLog dialog: added ability to switch back to non-detailed view. 27)..Added: WEB dialog: added new tags to customize bug report page. 28)..Improved: WEB dialog: improved support for unicode and charset. 29)..Added: New dialog type: RTL dialog. 30)..Added: New dialog type: console output. 31)..Added: New dialog type: system logging. 32)..Added: New dialog type: Windows Error Reporting. 33)..Improved: Sending - new options and new customization possibilities: 34)..Added: All send methods: added ability to setup multiply send methods. 35)..Added: All send methods: added ability to change send method order. 36)..Added: All send methods: added separate settings for each send method. 37)..Added: All send methods: ability to test send method directly from configuration dialog by sending a demo bug report. 38)..Added: SMTP client send method: added SSL support. 39)..Added: SMTP client send method: added TLS support. 40)..Added: SMTP client send method: added option for using real e-mail address. 41)..Added: SMTP server send method: added option for using real e-mail address. 42)..Added: HTTP upload send method: added support for custom backward feedback messages. 43)..Added: FTP upload send method: added creating folders on FTP (like remote ForceDirectories). 44)..Added: Mantis send method: added API support (MantisConnect, out-of-the-box since Mantis 1.1.0, available as add-on for previous versions). 45)..Added: Mantis send method: added support for custom "Count" field. 46)..Added: Mantis send method: added options for controlling duplicates. 47)..Added: Mantis send method: added support for SSL/TLS. 48)..Added: FogBugz send method: added API support (out-of-the-box since ForBugz 7, available as add-on for FogBugz 6). 49)..Added: FogBugz send method: EurekaLog will update "Occurrences" field (count of bugs). 50)..Added: FogBugz send method: EurekaLog will respect "Stop reporting" option (BugzScout's setting). 51)..Added: FogBugz send method: EurekaLog will respect "Scout message" option (BugzScout's setting). 52)..Added: FogBugz send method: EurekaLog will store client's e-mail as issue's correspondent. 53)..Added: FogBugz send method: added options for controlling duplicates. 54)..Added: FogBugz send method: added support for "Area" field. 55)..Added: FogBugz send method: added support for SSL/TLS. 56)..Added: BugZilla send method: added API support. 57)..Added: BugZilla send method: added support for custom "Count" field. 58)..Added: BugZilla send method: added options for controlling duplicates. 59)..Added: BugZilla send method: added support for SSL/TLS. 60)..Added: New send method: Shell (mailto protocol). 61)..Added: New send method: extended MAPI. 62)..Added: Support for separate code and debug info injection. 
63)..Added: Ability to use custom units before EurekaLog's units. 64)..Added: Support for external configuration file in IDE expert. 65)..Added: Now EurekaLog stores only those project options which are different from defaults (to save disk space and reduce noise in project file). 66)..Added: Now EurekaLog stores project options sorted (alphabet order). 67)..Added: Separate settings for saving modules and processes lists to bug report. 68)..Added: Support for taking screenshots of multiply monitors. 69)..Added: More screenshot customization options. 70)..Added: More control over bug report's file names. 71)..Added: New environment variables. 72)..Added: Deleting .map file after compilation. 73)..Added: Support for different .dpr and .dproj file names. 74)..Improved: memory leaks detection feature - new options and new customization possibilities: 75)..Added: Ability to track memory problems without activation of leaks checking. 76)..Added: Support for sharing memory manager. 77)..Added: Support for tracking leaks in applications built with run-time packages. 78)..Added: Option to zero-fill freed memory. 79)..Added: Option to enable leaks detection only when running under debugger. 80)..Added: Option for manual activation control for leaks detection (via command-line switches). 81)..Added: Option to select stack tracing method for memory problems. 82)..Added: Option to trigger memory leak reporting only for large leaked memory's size. 83)..Added: Option to control limit of number of reported leak. 84)..Added: CheckHeap function to force check of heap's consistency. 85)..Added: DumpAllocationsToFile function to save information about allocated memory to log file. 86)..Added: Registered leaks feature. 87)..Added: Run-time control over memory leak registering. 88)..Added: New recognized leak type: String (both ANSI and Unicode are supported). 89)..Added: Memory features support for C++ Builder. 90)..Added: Resource leaks detection feature. 91)..Improved: Compilation speed increased. 92)..Added: Support for generics in debug information. 93)..Added: Chained/nested exceptions support. 94)..Added: Wait Chain Traversal support. 95)..Added: Support for named threads. 96)..Added: Additional information for threads in call stack. 97)..Improved: EurekaLog Viewer Tool: 98)..Added: Now Viewer has its own help file 99)..Added: Viewer now supports a FireBird based database on local file or remote server. 100).Added: You can have more that one user account for FireBird based database. 101).Added: Viewer now can be launched in View mode (Viewer can be configured to any DB or View mode). 102).Added: Viewer's database now supports storing files, associated with the report (you can also add and remove files manually). 103).Added: Viewer supports "Import" and "View" commands for report files. 104).Improved: Extended support for more log formats (XML, packed ELF, etc). 105).Added: Columns in report's list now can be configured (you can hide and show them). 106).Added: There are a plenty of new columns added to report's list. 107).Added: Ability of auto-download reports from e-mail account. 108).Improved: printing - now you can print the entire report (including screenshots). Old behaviour of printing just one tab (call stack only, for example) also remains. 109).Added: Viewer can now have more that one run-time instance . 110).Added: File import status dialog is now configurable (you can disable it, if you want to). 111).Added: There is a preview area for screenshots, available in reports. 
112).Improved: Now Viewer is more Vista-friendly (i.e. file associations are managed in HKCU, rather that in HKLM, storing configuration in user's Application Data, etc, etc). 113).Added: Report's list now supports multi-select, so operations can be performed on many reports at time. 114).Added: There are plenty of new command line abilities, like specifying several files and new switches. 115).Improved: Bunch of minor changes and improvements. WARNING: -------- There are many changes in this release. See the "Changed from the old 6.x version" help topic for further information! EurekaLog 7 also have "EurekaLog 6 backward compatibility mode". Please, refer to help file for more information. We also have the detailed "Upgrade guide" in our help system.
Computer Networking: A Top-Down Approach, 6th Edition
Solutions to Review Questions and Problems
Version Date: May 2012

This document contains the solutions to review questions and problems for the 6th edition of Computer Networking: A Top-Down Approach by Jim Kurose and Keith Ross. These solutions are being made available to instructors ONLY. Please do NOT copy or distribute this document to others (even other instructors). Please do not post any solutions on a publicly-available Web site. We'll be happy to provide a copy (up-to-date) of this solution manual ourselves to anyone who asks.

Acknowledgments: Over the years, several students and colleagues have helped us prepare this solutions manual. Special thanks goes to HongGang Zhang, Rakesh Kumar, Prithula Dhungel, and Vijay Annapureddy. Also thanks to all the readers who have made suggestions and corrected errors.

All material © copyright 1996-2012 by J.F. Kurose and K.W. Ross. All rights reserved.

Chapter 1 Review Questions

1. There is no difference. Throughout this text, the words "host" and "end system" are used interchangeably. End systems include PCs, workstations, Web servers, mail servers, PDAs, Internet-connected game consoles, etc.

2. From Wikipedia: Diplomatic protocol is commonly described as a set of international courtesy rules. These well-established and time-honored rules have made it easier for nations and people to live and work together. Part of protocol has always been the acknowledgment of the hierarchical standing of all present. Protocol rules are based on the principles of civility.

3. Standards are important for protocols so that people can create networking systems and products that interoperate.

4. 1. Dial-up modem over telephone line: home; 2. DSL over telephone line: home or small office; 3. Cable to HFC: home; 4. 100 Mbps switched Ethernet: enterprise; 5. WiFi (802.11): home and enterprise; 6. 3G and 4G: wide-area wireless.

5. HFC bandwidth is shared among the users. On the downstream channel, all packets emanate from a single source, namely, the head end. Thus, there are no collisions in the downstream channel.

6. In most American cities, the current possibilities include: dial-up; DSL; cable modem; fiber-to-the-home.

7. Ethernet LANs have transmission rates of 10 Mbps, 100 Mbps, 1 Gbps and 10 Gbps.

8. Today, Ethernet most commonly runs over twisted-pair copper wire. It also can run over fiber-optic links.

9. Dial-up modems: up to 56 Kbps, bandwidth is dedicated; ADSL: up to 24 Mbps downstream and 2.5 Mbps upstream, bandwidth is dedicated; HFC: downstream rates up to 42.8 Mbps and upstream rates of up to 30.7 Mbps, bandwidth is shared; FTTH: 2-10 Mbps upload, 10-20 Mbps download, bandwidth is not shared.

10. There are two popular wireless Internet access technologies today:
a) WiFi (802.11). In a wireless LAN, wireless users transmit/receive packets to/from a base station (i.e., wireless access point) within a radius of a few tens of meters. The base station is typically connected to the wired Internet and thus serves to connect wireless users to the wired network.
b) 3G and 4G wide-area wireless access networks. In these systems, packets are transmitted over the same wireless infrastructure used for cellular telephony, with the base station thus being managed by a telecommunications provider. This provides wireless access to users within a radius of tens of kilometers of the base station.
11. At time t0 the sending host begins to transmit. At time t1 = L/R1, the sending host completes transmission and the entire packet is received at the router (no propagation delay). Because the router has the entire packet at time t1, it can begin to transmit the packet to the receiving host at time t1. At time t2 = t1 + L/R2, the router completes transmission and the entire packet is received at the receiving host (again, no propagation delay). Thus, the end-to-end delay is L/R1 + L/R2.

12. A circuit-switched network can guarantee a certain amount of end-to-end bandwidth for the duration of a call. Most packet-switched networks today (including the Internet) cannot make any end-to-end guarantees for bandwidth. FDM requires sophisticated analog hardware to shift the signal into the appropriate frequency bands.

13. a) 2 users can be supported because each user requires half of the link bandwidth.
b) Since each user requires 1 Mbps when transmitting, if two or fewer users transmit simultaneously, a maximum of 2 Mbps will be required. Since the available bandwidth of the shared link is 2 Mbps, there will be no queuing delay before the link. Whereas, if three users transmit simultaneously, the bandwidth required will be 3 Mbps, which is more than the available bandwidth of the shared link. In this case, there will be queuing delay before the link.
c) Probability that a given user is transmitting = 0.2.
d) Probability that all three users are transmitting simultaneously = (0.2)^3 = 0.008. Since the queue grows when all the users are transmitting, the fraction of time during which the queue grows (which is equal to the probability that all three users are transmitting simultaneously) is 0.008.

14. If the two ISPs do not peer with each other, then when they send traffic to each other they have to send the traffic through a provider ISP (intermediary), which they have to pay to carry the traffic. By peering with each other directly, the two ISPs can reduce their payments to their provider ISPs. An Internet Exchange Point (IXP) (typically in a standalone building with its own switches) is a meeting point where multiple ISPs can connect and/or peer together. An IXP earns its money by charging each of the ISPs that connect to it a relatively small fee, which may depend on the amount of traffic sent to or received from the IXP.

15. Google's private network connects together all its data centers, big and small. Traffic between the Google data centers passes over its private network rather than over the public Internet. Many of these data centers are located in, or close to, lower-tier ISPs. Therefore, when Google delivers content to a user, it often can bypass higher-tier ISPs. What motivates content providers to create these networks? First, the content provider has more control over the user experience, since it has to use few intermediary ISPs. Second, it can save money by sending less traffic into provider networks. Third, if ISPs decide to charge more money to highly profitable content providers (in countries where net neutrality doesn't apply), the content providers can avoid these extra payments.

16. The delay components are processing delays, transmission delays, propagation delays, and queuing delays. All of these delays are fixed, except for the queuing delays, which are variable.

17. a) 1000 km, 1 Mbps, 100 bytes
b) 100 km, 1 Mbps, 100 bytes

18. 10 msec; d/s; no; no

19. a) 500 kbps
b) 64 seconds
c) 100 kbps; 320 seconds
(See the sketch below.)
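The answers to question 19 follow from the bottleneck rule: end-to-end throughput is the minimum of the link rates, and transfer time is the file size divided by that minimum. A minimal sketch of the arithmetic; the 4 MB file and the 500 kbps / 2 Mbps / 1 Mbps link rates are assumptions taken from the usual form of this question, not stated in the answer above:

# Throughput of a path is limited by its slowest (bottleneck) link.
def throughput(link_rates_bps):
    return min(link_rates_bps)

def transfer_time(file_size_bits, link_rates_bps):
    return file_size_bits / float(throughput(link_rates_bps))

file_size = 4 * 10**6 * 8            # 4 MB file, in bits (assumed)
links = [500e3, 2e6, 1e6]            # 500 kbps bottleneck (assumed)
print(throughput(links))             # 500000.0 bps, as in part (a)
print(transfer_time(file_size, links))   # 64.0 seconds, as in part (b)

Reducing the first link to 100 kbps and rerunning gives 320 seconds, matching part (c).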
20. End system A breaks the large file into chunks. It adds a header to each chunk, thereby generating multiple packets from the file. The header in each packet includes the IP address of the destination (end system B). The packet switch uses the destination IP address in the packet to determine the outgoing link. Asking which road to take is analogous to a packet asking which outgoing link it should be forwarded on, given the packet's destination address.

21. The maximum emission rate is 500 packets/sec and the maximum transmission rate is 350 packets/sec. The corresponding traffic intensity is 500/350 = 1.43 > 1. Loss will eventually occur for each experiment; but the time when loss first occurs will be different from one experiment to the next due to the randomness in the emission process.

22. Five generic tasks are error control, flow control, segmentation and reassembly, multiplexing, and connection setup. Yes, these tasks can be duplicated at different layers. For example, error control is often provided at more than one layer.

23. The five layers in the Internet protocol stack are - from top to bottom - the application layer, the transport layer, the network layer, the link layer, and the physical layer. The principal responsibilities are outlined in Section 1.5.1.

24. Application-layer message: data which an application wants to send, passed onto the transport layer; transport-layer segment: generated by the transport layer, encapsulates the application-layer message with a transport-layer header; network-layer datagram: encapsulates the transport-layer segment with a network-layer header; link-layer frame: encapsulates the network-layer datagram with a link-layer header.

25. Routers process network, link and physical layers (layers 1 through 3). (This is a little bit of a white lie, as modern routers sometimes act as firewalls or caching components, and process the transport layer as well.) Link-layer switches process link and physical layers (layers 1 through 2). Hosts process all five layers.

26. a) Virus: requires some form of human interaction to spread. Classic example: e-mail viruses.
b) Worms: no user interaction needed. A worm in an infected host scans IP addresses and port numbers, looking for vulnerable processes to infect.

27. Creation of a botnet requires an attacker to find a vulnerability in some application or system (e.g. exploiting the buffer overflow vulnerability that might exist in an application). After finding the vulnerability, the attacker needs to scan for hosts that are vulnerable. The target is basically to compromise a series of systems by exploiting that particular vulnerability. Any system that is part of the botnet can automatically scan its environment and propagate by exploiting the vulnerability. An important property of such botnets is that the originator of the botnet can remotely control and issue commands to all the nodes in the botnet. Hence, it becomes possible for the attacker to issue a command to all the nodes that targets a single node (for example, all nodes in the botnet might be commanded by the attacker to send a TCP SYN message to the target, which might result in a TCP SYN flood attack at the target).

28. Trudy can pretend to be Bob to Alice (and vice versa) and partially or completely modify the message(s) being sent from Bob to Alice. For example, she can easily change the phrase "Alice, I owe you $1000" to "Alice, I owe you $10,000". Furthermore, Trudy can even drop the packets that are being sent by Bob to Alice (and vice versa), even if the packets from Bob to Alice are encrypted.
Chapter 1 Problems

Problem 1
There is no single right answer to this question. Many protocols would do the trick. Here's a simple answer below:

Messages from ATM machine to Server
Msg name        purpose
--------        -------
HELO <userid>   Let server know that there is a card in the ATM machine; ATM card transmits user ID to Server
PASSWD <passwd> User enters PIN, which is sent to server
BALANCE         User requests balance
WITHDRAWL <amt> User asks to withdraw money
BYE             User all done

Messages from Server to ATM machine (display)
Msg name        purpose
--------        -------
PASSWD          Ask user for PIN (password)
OK              Last requested operation (PASSWD, WITHDRAWL) OK
ERR             Last requested operation (PASSWD, WITHDRAWL) in ERROR
AMOUNT <amt>    Sent in response to BALANCE request
BYE             User done, display welcome screen at ATM

Correct operation:
client                          server
HELO (userid)   --------------> (check if valid userid)
                <-------------- PASSWD
PASSWD <passwd> --------------> (check password)
                <-------------- OK (password is OK)
BALANCE         -------------->
                <-------------- AMOUNT <amt>
WITHDRAWL <amt> --------------> check if enough $ to cover withdrawl
                <-------------- OK
ATM dispenses $
BYE             -------------->
                <-------------- BYE

In case of problems (e.g., not enough money to cover the withdrawal), the server instead answers the WITHDRAWL request with ERR, the ATM displays an error message and dispenses no money, and the session ends with the BYE exchange as above.

Problem 2
At time N*(L/R) the first packet has reached the destination, the second packet is stored in the last router, the third packet is stored in the next-to-last router, etc. At time N*(L/R) + L/R, the second packet has reached the destination, the third packet is stored in the last router, etc. Continuing with this logic, we see that at time N*(L/R) + (P-1)*(L/R) = (N+P-1)*(L/R) all packets have reached the destination.
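The (N+P-1)*(L/R) result is easy to sanity-check numerically. A minimal sketch that walks the packets hop by hop under store-and-forward; the concrete values of N, P, L and R are arbitrary, chosen only for illustration:

# Each packet must be fully received at a node before being forwarded,
# and a link transmits one packet every L/R seconds.
def last_packet_arrival(N, P, L, R):
    t = L / float(R)                          # one transmission delay
    depart = [p * t for p in range(1, P + 1)] # packet p fully leaves the source at p*t
    for _ in range(N - 1):                    # N links in total; N-1 remain after the first
        depart = [d + t for d in depart]      # no queuing: packets are spaced exactly t apart
    return depart[-1]

N, P, L, R = 3, 4, 1000, 1e6                  # example values (assumed)
print('%.6f' % last_packet_arrival(N, P, L, R))   # 0.006000
print('%.6f' % ((N + P - 1) * L / R))             # (N+P-1)*(L/R) = 0.006000, matches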
Problem 3
a) A circuit-switched network would be well suited to the application, because the application involves long sessions with predictable smooth bandwidth requirements. Since the transmission rate is known and not bursty, bandwidth can be reserved for each application session without significant waste. In addition, the overhead costs of setting up and tearing down connections are amortized over the lengthy duration of a typical application session.
b) In the worst case, all the applications simultaneously transmit over one or more network links. However, since each link has sufficient bandwidth to handle the sum of all of the applications' data rates, no congestion (very little queuing) will occur. Given such generous link capacities, the network does not need congestion control mechanisms.

Problem 4
a) Between the switch in the upper left and the switch in the upper right we can have 4 connections. Similarly we can have four connections between each of the 3 other pairs of adjacent switches. Thus, this network can support up to 16 connections.
b) We can have 4 connections passing through the switch in the upper-right-hand corner and another 4 connections passing through the switch in the lower-left-hand corner, giving a total of 8 connections.
c) Yes. For the connections between A and C, we route two connections through B and two connections through D. For the connections between B and D, we route two connections through A and two connections through C. In this manner, there are at most 4 connections passing through any link.

Problem 5
Tollbooths are 75 km apart, and the cars propagate at 100 km/hr. A tollbooth services a car at a rate of one car every 12 seconds.
a) There are ten cars. It takes 120 seconds, or 2 minutes, for the first tollbooth to service the 10 cars. Each of these cars has a propagation delay of 45 minutes (to travel 75 km) before arriving at the second tollbooth. Thus, all the cars are lined up before the second tollbooth after 47 minutes. The whole process repeats itself for traveling between the second and third tollbooths. It also takes 2 minutes for the third tollbooth to service the 10 cars. Thus the total delay is 96 minutes.
b) Delay between tollbooths is 8*12 seconds plus 45 minutes, i.e., 46 minutes and 36 seconds. The total delay is twice this amount plus 8*12 seconds, i.e., 94 minutes and 48 seconds.

Problem 6
a) dprop = m/s seconds.
b) dtrans = L/R seconds.
c) dend-to-end = (m/s + L/R) seconds.
d) The bit is just leaving Host A.
e) The first bit is in the link and has not reached Host B.
f) The first bit has reached Host B.
g) Want m = (L/R)*s = (120/(56*10^3))*(2.5*10^8) = 536 km.

Problem 7
Consider the first bit in a packet. Before this bit can be transmitted, all of the bits in the packet must be generated. This requires 56*8/(64*10^3) sec = 7 msec. The time required to transmit the packet is 56*8/(2*10^6) sec = 224 microsec. Propagation delay = 10 msec. The delay until decoding is 7 msec + 224 microsec + 10 msec = 17.224 msec. A similar analysis shows that all bits experience a delay of 17.224 msec.
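Problem 7's answer decomposes into packetization delay + transmission delay + propagation delay. A minimal sketch of that arithmetic (56-byte packets, 64 kbps voice encoding, a 2 Mbps link and 10 ms propagation delay, as used in the solution above):

# Delay until the first (and every) bit is decoded at the receiver.
packet_bits = 56 * 8        # 56-byte packets
encode_rate = 64e3          # voice is digitized at 64 kbps
link_rate = 2e6             # 2 Mbps link
d_prop = 10e-3              # 10 ms propagation delay

d_packetize = packet_bits / encode_rate   # 7 ms to fill the packet before sending
d_trans = packet_bits / link_rate         # 224 microseconds on the wire
total = d_packetize + d_trans + d_prop
print('%.3f msec' % (total * 1e3))        # 17.224 msec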
Problem 8
a) 20 users can be supported.
b) p = 0.1.
c) The probability that n of the 120 users are transmitting simultaneously is C(120, n) * p^n * (1-p)^(120-n).
d) "21 or more users" is the sum of the expression in (c) over n = 21, ..., 120. We use the central limit theorem to approximate this probability. The number of simultaneously transmitting users has mean 120*0.1 = 12 and standard deviation sqrt(120*0.1*0.9) = 3.286, so P("21 or more users") is approximately P(Z >= (21 - 12)/3.286) = P(Z >= 2.74) = 0.003, where Z is a standard normal r.v.

Problem 9
10,000

Problem 10
The first end system requires L/R1 to transmit the packet onto the first link; the packet propagates over the first link in d1/s1; the packet switch adds a processing delay of dproc; after receiving the entire packet, the packet switch connecting the first and the second link requires L/R2 to transmit the packet onto the second link; the packet propagates over the second link in d2/s2. Similarly, we can find the delay caused by the second switch and the third link: L/R3, dproc, and d3/s3. Adding these five delays gives
dend-end = L/R1 + L/R2 + L/R3 + d1/s1 + d2/s2 + d3/s3 + dproc + dproc
To answer the second question, we simply plug the values into the equation to get 6 + 6 + 6 + 20 + 16 + 4 + 3 + 3 = 64 msec.

Problem 11
Because bits are immediately transmitted, the packet switch does not introduce any delay; in particular, it does not introduce a transmission delay. Thus,
dend-end = L/R + d1/s1 + d2/s2 + d3/s3
For the values in Problem 10, we get 6 + 20 + 16 + 4 = 46 msec.

Problem 12
The arriving packet must first wait for the link to transmit 4.5 * 1,500 bytes = 6,750 bytes or 54,000 bits. Since these bits are transmitted at 2 Mbps, the queuing delay is 27 msec. Generally, the queuing delay is (nL + (L - x))/R.

Problem 13
a) The queuing delay is 0 for the first transmitted packet, L/R for the second transmitted packet, and generally, (n-1)L/R for the nth transmitted packet. Thus, the average delay for the N packets is:
(L/R + 2L/R + ... + (N-1)L/R)/N = L/(RN) * (1 + 2 + ... + (N-1)) = L/(RN) * N(N-1)/2 = (N-1)L/(2R)
Note that here we used the well-known fact: 1 + 2 + ... + (N-1) = N(N-1)/2.
b) It takes NL/R seconds to transmit the N packets. Thus, the buffer is empty when each batch of N packets arrives. Thus, the average delay of a packet across all batches is the average delay within one batch, i.e., (N-1)L/(2R).

Problem 14
a) The transmission delay is L/R. The total delay is the queuing delay plus the transmission delay: IL/(R(1 - I)) + L/R, where I = aL/R is the traffic intensity.
b) Let x = L/R. Total delay = x/(1 - ax). For x = 0, the total delay is 0; as we increase x, the total delay increases, approaching infinity as x approaches 1/a.

Problem 15
Total delay = (L/R)/(1 - aL/R) = (1/u)/(1 - a/u) = 1/(u - a), where u = R/L is the service rate in packets/sec.

Problem 16
The total number of packets in the system includes those in the buffer and the packet that is being transmitted. So, N = 10 + 1. Because N = a*d (Little's formula), so (10 + 1) = a * (queuing delay + transmission delay). That is, 11 = a * (0.01 + 1/100) = a * (0.01 + 0.01). Thus, a = 550 packets/sec.
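Little's formula N = a*d (average number in the system = arrival rate * average delay) is all that Problem 16 uses; a quick check of the arithmetic:

# Little's law: N = a * d  =>  a = N / d
N = 10 + 1                  # 10 packets queued + 1 in transmission
d_queue = 0.01              # 10 ms average queuing delay
d_trans = 1 / 100.0         # transmission delay, also 10 ms
a = N / (d_queue + d_trans)
print(a)                    # 550.0 packets/sec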
Problem 17
a) There are N nodes (the source host and the N-1 routers). Let dproc_i denote the processing delay at the ith node, R_i the transmission rate of the ith link (so dtrans_i = L/R_i), and dprop_i the propagation delay across the ith link. Then
dend-end = sum over i = 1..N of [dproc_i + L/R_i + dprop_i].
b) Let dqueue_i denote the average queuing delay at node i. Then
dend-end = sum over i = 1..N of [dproc_i + L/R_i + dprop_i + dqueue_i].

Problem 18
On Linux you can use the command
traceroute www.targethost.com
and in the Windows command prompt you can use
tracert www.targethost.com
In either case, you will get three delay measurements. For those three measurements you can calculate the mean and standard deviation. Repeat the experiment at different times of the day and comment on any changes. Here is an example solution:
a) Traceroutes between the San Diego Super Computer Center and www.poly.edu: The average (mean) of the round-trip delays at each of the three hours is 71.18 ms, 71.38 ms and 71.55 ms, respectively. The standard deviations are 0.075 ms, 0.21 ms, 0.05 ms, respectively. In this example, the traceroutes have 12 routers in the path at each of the three hours. No, the paths didn't change during any of the hours. Traceroute packets passed through four ISP networks from source to destination. Yes, in this experiment the largest delays occurred at peering interfaces between adjacent ISPs.
b) Traceroutes from www.stella-net.net (France) to www.poly.edu (USA): The average round-trip delays at each of the three hours are 87.09 ms, 86.35 ms and 86.48 ms, respectively. The standard deviations are 0.53 ms, 0.18 ms, 0.23 ms, respectively. In this example, there are 11 routers in the path at each of the three hours. No, the paths didn't change during any of the hours. Traceroute packets passed through three ISP networks from source to destination. Yes, in this experiment the largest delays occurred at peering interfaces between adjacent ISPs.

Problem 19
An example solution:
a) Traceroutes from two different cities in France to New York City in the United States: in these traceroutes, seven links are in common, including the transatlantic link.
b) In this example of traceroutes from one city in France and from another city in Germany to the same host in the United States, three links are in common, including the transatlantic link.
c) Traceroutes to two different cities in China from the same host in the United States: five links are common in the two traceroutes. The two traceroutes diverge before reaching China.

Problem 20
Throughput = min{Rs, Rc, R/M}

Problem 21
If only one path is used, the maximum throughput is given by:
max over k = 1..M of min{R1^k, R2^k, ..., RN^k}.
If all paths are used, the maximum throughput is given by:
sum over k = 1..M of min{R1^k, R2^k, ..., RN^k}.

Problem 22
Probability of successfully receiving a packet is: ps = (1 - p)^N. The number of transmissions needed until the packet is successfully received by the client is a geometric random variable with success probability ps. Thus, the average number of transmissions needed is given by: 1/ps. Then, the average number of re-transmissions needed is given by: 1/ps - 1.

Problem 23
Let's call the first packet A and call the second packet B.
a) If the bottleneck link is the first link, then packet B is queued at the first link waiting for the transmission of packet A. So the packet inter-arrival time at the destination is simply L/Rs.
b) If the second link is the bottleneck link and both packets are sent back to back, it must be true that the second packet arrives at the input queue of the second link before the second link finishes the transmission of the first packet. That is,
L/Rs + L/Rs + dprop <= L/Rs + dprop + L/Rc
Thus, the minimum value of T is L/Rc - L/Rs.

Problem 24
40 terabytes = 40 * 10^12 * 8 bits. So, if using the dedicated link, it will take 40 * 10^12 * 8 / (100 * 10^6) = 3,200,000 seconds = 37 days. But with FedEx overnight delivery, you can guarantee the data arrives in one day, and it should cost less than $100.

Problem 25
a) 160,000 bits
b) 160,000 bits
c) The bandwidth-delay product of a link is the maximum number of bits that can be in the link.
d) The width of a bit = length of link / bandwidth-delay product, so 1 bit is 125 meters long, which is longer than a football field.
e) s/R

Problem 26
s/R = 20,000 km, then R = s/(2*10^7 m) = (2.5*10^8)/(2*10^7) = 12.5 bps

Problem 27
a) 80,000,000 bits
b) 800,000 bits; this is because the maximum number of bits that will be in the link at any given time = min(bandwidth-delay product, packet size) = 800,000 bits.
c) 0.25 meters

Problem 28
a) ttrans + tprop = 400 msec + 80 msec = 480 msec.
b) 20 * (ttrans + 2*tprop) = 20 * (20 msec + 80 msec) = 2 sec.
c) Breaking up a file takes longer to transmit because each data packet and its corresponding acknowledgement packet add their own propagation delays.

Problem 29
Recall that a geostationary satellite is 36,000 kilometers away from the earth's surface.
a) 150 msec
b) 1,500,000 bits
c) 600,000,000 bits

Problem 30
Let's suppose the passenger and his/her bags correspond to the data unit arriving at the top of the protocol stack. When the passenger checks in, his/her bags are checked, and a tag is attached to the bags and ticket. This is additional information added in the Baggage layer of Figure 1.20 that allows the Baggage layer to implement the service of separating the passengers and baggage on the sending side, and then reuniting them (hopefully!) on the destination side. When a passenger then passes through security, an additional stamp is often added to his/her ticket, indicating that the passenger has passed through a security check. This information is used to ensure (e.g., by later checks for the security information) secure transfer of people.

Problem 31
a) Time to send the message from the source host to the first packet switch = (8*10^6)/(2*10^6) sec = 4 sec. With store-and-forward switching, the total time to move the message from source host to destination host = 4 sec * 3 hops = 12 sec.
b) Time to send the 1st packet from the source host to the first packet switch = (10^4)/(2*10^6) sec = 5 msec. The time at which the 2nd packet is received at the first switch = the time at which the 1st packet is received at the second switch = 2 * 5 msec = 10 msec.
c) Time at which the 1st packet is received at the destination host = 5 msec * 3 hops = 15 msec. After this, every 5 msec one packet will be received; thus the time at which the last (800th) packet is received = 15 msec + 799 * 5 msec = 4.01 sec. It can be seen that the delay when using message segmentation is significantly less (almost 1/3rd).
d) Without message segmentation, if bit errors are not tolerated, if there is a single bit error, the whole message has to be retransmitted (rather than a single packet). Without message segmentation, huge packets (containing HD videos, for example) are sent into the network. Routers have to accommodate these huge packets. Smaller packets have to queue behind enormous packets and suffer unfair delays.
e) Packets have to be put in sequence at the destination. Message segmentation results in many smaller packets. Since the header size is usually the same for all packets regardless of their size, with message segmentation the total amount of header bytes is greater.

Problem 32
Yes, the delays in the applet correspond to the delays in Problem 31. The propagation delays affect the overall end-to-end delays both for packet switching and message switching equally.

Problem 33
There are F/S packets. Each packet is S + 80 bits. The time at which the last packet is received at the first router is (F/S)*(S+80)/R sec. At this time, the first F/S - 2 packets are at the destination, and the (F/S - 1)th packet is at the second router. The last packet must then be transmitted by the first router and the second router, with each transmission taking (S+80)/R sec. Thus the delay in sending the whole file is
delay = (F/S + 2) * (S+80)/R
To calculate the value of S which leads to the minimum delay, set the derivative with respect to S to zero:
d(delay)/dS = (1/R) * (2 - 80F/S^2) = 0, which gives S = sqrt(40F).
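The optimum S = sqrt(40F) from Problem 33 can be double-checked numerically by evaluating the delay expression around the closed-form answer. A minimal sketch; F and R are arbitrary example values:

import math

# delay(S) = (F/S + 2) * (S + 80) / R for a file of F bits, 80-bit headers,
# link rate R, and two store-and-forward routers between the hosts.
def delay(S, F, R):
    return (F / float(S) + 2) * (S + 80) / R

F = 10**6                      # 1 Mbit file (example value)
R = 1e6                        # 1 Mbps links (example value)
S_opt = math.sqrt(40 * F)      # closed-form optimum, about 6325 bits
# The optimum should beat its neighbours:
print(delay(S_opt, F, R) <= min(delay(S_opt - 100, F, R),
                                delay(S_opt + 100, F, R)))   # True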
Problem 34
The circuit-switched telephone networks and the Internet are connected together at "gateways". When a Skype user (connected to the Internet) calls an ordinary telephone, a circuit is established between a gateway and the telephone user over the circuit-switched network. The Skype user's voice is sent in packets over the Internet to the gateway. At the gateway, the voice signal is reconstructed and then sent over the circuit. In the other direction, the voice signal is sent over the circuit-switched network to the gateway. The gateway packetizes the voice signal and sends the voice packets to the Skype user.

Chapter 2 Review Questions

1. The Web: HTTP; file transfer: FTP; remote login: Telnet; e-mail: SMTP; BitTorrent file sharing: BitTorrent protocol.

2. Network architecture refers to the organization of the communication process into layers (e.g., the five-layer Internet architecture). Application architecture, on the other hand, is designed by an application developer and dictates the broad structure of the application (e.g., client-server or P2P).

3. The process which initiates the communication is the client; the process that waits to be contacted is the server.

4. No. In a P2P file-sharing application, the peer that is receiving a file is typically the client and the peer that is sending the file is typically the server.

5. The IP address of the destination host and the port number of the socket in the destination process.

6. You would use UDP. With UDP, the transaction can be completed in one roundtrip time (RTT) - the client sends the transaction request into a UDP socket, and the server sends the reply back to the client's UDP socket. With TCP, a minimum of two RTTs are needed - one to set up the TCP connection, and another for the client to send the request and for the server to send back the reply.

7. One such example is remote word processing, for example, with Google Docs. However, because Google Docs runs over the Internet (using TCP), timing guarantees are not provided.

8. a) Reliable data transfer: TCP provides a reliable byte-stream between client and server, but UDP does not.
b) A guarantee that a certain value for throughput will be maintained: neither.
c) A guarantee that data will be delivered within a specified amount of time: neither.
d) Confidentiality (via encryption): neither.

9. SSL operates at the application layer. The SSL socket takes unencrypted data from the application layer, encrypts it and then passes it to the TCP socket. If the application developer wants TCP to be enhanced with SSL, she has to include the SSL code in the application.

10. A protocol uses handshaking if the two communicating entities first exchange control packets before sending data to each other. SMTP uses handshaking at the application layer whereas HTTP does not.

11. The applications associated with those protocols require that all application data be received in the correct order and without gaps. TCP provides this service whereas UDP does not.
12. When the user first visits the site, the server creates a unique identification number, creates an entry in its back-end database, and returns this identification number as a cookie number. This cookie number is stored on the user's host and is managed by the browser. During each subsequent visit (and purchase), the browser sends the cookie number back to the site. Thus the site knows when this user (more precisely, this browser) is visiting the site.

13. Web caching can bring the desired content "closer" to the user, possibly to the same LAN to which the user's host is connected. Web caching can reduce the delay for all objects, even objects that are not cached, since caching reduces the traffic on links.

14. Telnet is not available in Windows 7 by default. To make it available, go to Control Panel, Programs and Features, Turn Windows Features On or Off, and check Telnet client. To start Telnet, in the Windows command prompt, issue the following command
> telnet webserver 80
where "webserver" is some web server. After issuing the command, you have established a TCP connection between your client telnet program and the web server. Then type in an HTTP GET message. An example is given below. [The captured exchange itself is not reproduced here.] Since the index.html page in this web server was not modified since Fri, 18 May 2007 09:23:34 GMT, and the above commands were issued on Sat, 19 May 2007, the server returned "304 Not Modified". Note that the first 4 lines are the GET message and header lines typed by the user, and the next 4 lines (starting from HTTP/1.1 304 Not Modified) are the response from the web server. (A scripted version of this exercise is sketched after question 16.)

15. FTP uses two parallel TCP connections, one connection for sending control information (such as a request to transfer a file) and another connection for actually transferring the file. Because the control information is not sent over the same connection that the file is sent over, FTP sends control information out of band.

16. The message is first sent from Alice's host to her mail server over HTTP. Alice's mail server then sends the message to Bob's mail server over SMTP. Bob then transfers the message from his mail server to his host over POP3.
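The telnet exercise in question 14 can also be scripted: open a TCP connection to port 80 and send a hand-written GET request, exactly as you would type it into telnet. A minimal sketch; the host name is just an example:

from socket import socket, AF_INET, SOCK_STREAM

# Hand-craft the same HTTP GET you would type into a telnet session.
host = 'gaia.cs.umass.edu'          # example server (assumed)
request = 'GET /index.html HTTP/1.1\r\nHost: %s\r\n\r\n' % host

s = socket(AF_INET, SOCK_STREAM)
s.connect((host, 80))               # TCP connection, like "telnet host 80"
s.send(request.encode())
print(s.recv(4096).decode())        # status line and headers come back first
s.close()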
17. Received: from 65.54.246.203 (EHLO bay0-omc3-s3.bay0.hotmail.com) (65.54.246.203) by mta419.mail.mud.yahoo.com with SMTP; Sat, 19 May 2007 16:53:51 -0700
Received: from hotmail.com ([65.55.135.106]) by bay0-omc3-s3.bay0.hotmail.com with Microsoft SMTPSVC(6.0.3790.2668); Sat, 19 May 2007 16:52:42 -0700
Received: from mail pickup service by hotmail.com with Microsoft SMTPSVC; Sat, 19 May 2007 16:52:41 -0700
Message-ID:
Received: from 65.55.135.123 by by130fd.bay130.hotmail.msn.com with HTTP; Sat, 19 May 2007 23:52:36 GMT
From: "prithula dhungel"
To: [email protected]
Bcc:
Subject: Test mail
Date: Sat, 19 May 2007 23:52:36 +0000
Mime-Version: 1.0
Content-Type: Text/html; format=flowed
Return-Path: [email protected]
Figure: A sample mail message header

Received: This header field indicates the sequence in which the SMTP servers send and receive the mail message, including the respective timestamps. In this example there are 4 "Received:" header lines. This means the mail message passed through 5 different SMTP servers before being delivered to the receiver's mail box. The last (fourth) "Received:" header indicates the mail message flow from the SMTP server of the sender to the second SMTP server in the chain of servers. The sender's SMTP server is at address 65.55.135.123 and the second SMTP server in the chain is by130fd.bay130.hotmail.msn.com. The third "Received:" header indicates the mail message flow from the second SMTP server in the chain to the third server, and so on. Finally, the first "Received:" header indicates the flow of the mail message from the fourth SMTP server to the last SMTP server (i.e. the receiver's mail server) in the chain.
Message-id: The message has been given this number [email protected] (by bay0-omc3-s3.bay0.hotmail.com). Message-id is a unique string assigned by the mail system when the message is first created.
From: This indicates the email address of the sender of the mail. In the given example, the sender is "[email protected]".
To: This field indicates the email address of the receiver of the mail. In the example, the receiver is "[email protected]".
Subject: This gives the subject of the mail (if any, specified by the sender). In the example, the subject specified by the sender is "Test mail".
Date: The date and time when the mail was sent by the sender. In the example, the sender sent the mail on 19th May 2007, at time 23:52:36 GMT.
Mime-version: MIME version used for the mail. In the example, it is 1.0.
Content-type: The type of content in the body of the mail message. In the example, it is "text/html".
Return-Path: This specifies the email address to which the mail will be sent if the receiver of this mail wants to reply to the sender. This is also used by the sender's mail server for bouncing back undeliverable mail messages or mailer-daemon error messages. In the example, the return path is "[email protected]".

18. With download and delete, after a user retrieves its messages from a POP server, the messages are deleted. This poses a problem for the nomadic user, who may want to access the messages from many different machines (office PC, home PC, etc.). In the download-and-keep configuration, messages are not deleted after the user retrieves the messages. This can also be inconvenient, as each time the user retrieves the stored messages from a new machine, all of the non-deleted messages will be transferred to the new machine (including very old messages).

19. Yes, an organization's mail server and Web server can have the same alias for a host name. The MX record is used to map the mail server's host name to its IP address.

20. You should be able to see the sender's IP address for a user with an .edu email address. But you will not be able to see the sender's IP address if the user uses a gmail account.

21. It is not necessary that Bob will also provide chunks to Alice. Alice has to be in the top 4 neighbors of Bob for Bob to send out chunks to her; this might not occur even if Alice provides chunks to Bob throughout a 30-second interval.

22. Recall that in BitTorrent, a peer picks a random peer and optimistically unchokes the peer for a short period of time. Therefore, Alice will eventually be optimistically unchoked by one of her neighbors, during which time she will receive chunks from that neighbor.

23. The overlay network in a P2P file sharing system consists of the nodes participating in the file sharing system and the logical links between the nodes. There is a logical link (an "edge" in graph theory terms) from node A to node B if there is a semi-permanent TCP connection between A and B. An overlay network does not include routers.
24. Mesh DHT: the advantage is that in order to route a message to the peer (with ID) that is closest to the key, only one hop is required; the disadvantage is that each peer must track all other peers in the DHT. Circular DHT: the advantage is that each peer needs to track only a few other peers; the disadvantage is that O(N) hops are needed to route a message to the peer that is closest to the key.

25. File distribution, instant messaging, video streaming, distributed computing.

26. With the UDP server, there is no welcoming socket, and all data from different clients enters the server through this one socket. With the TCP server, there is a welcoming socket, and each time a client initiates a connection to the server, a new socket is created. Thus, to support n simultaneous connections, the server would need n+1 sockets.

27. For the TCP application, as soon as the client is executed, it attempts to initiate a TCP connection with the server. If the TCP server is not running, then the client will fail to make a connection. For the UDP application, the client does not initiate connections (or attempt to communicate with the UDP server) immediately upon execution.

Chapter 2 Problems

Problem 1
a) F
b) T
c) F
d) F
e) F

Problem 2
Access control commands: USER, PASS, ACCT, CWD, CDUP, SMNT, REIN, QUIT.
Transfer parameter commands: PORT, PASV, TYPE, STRU, MODE.
Service commands: RETR, STOR, STOU, APPE, ALLO, REST, RNFR, RNTO, ABOR, DELE, RMD, MKD, PWD, LIST, NLST, SITE, SYST, STAT, HELP, NOOP.

Problem 3
Application layer protocols: DNS and HTTP.
Transport layer protocols: UDP for DNS; TCP for HTTP.

Problem 4
a) The document request was http://gaia.cs.umass.edu/cs453/index.html. The Host: field indicates the server's name and /cs453/index.html indicates the file name.
b) The browser is running HTTP version 1.1, as indicated just before the first <cr><lf> pair.
c) The browser is requesting a persistent connection, as indicated by the Connection: keep-alive.
d) This is a trick question. This information is not contained in an HTTP message anywhere. So there is no way to tell this from looking at the exchange of HTTP messages alone. One would need information from the IP datagrams (that carried the TCP segment that carried the HTTP GET request) to answer this question.
e) Mozilla/5.0. The browser type information is needed by the server to send different versions of the same object to different types of browsers.

Problem 5
a) The status code of 200 and the phrase OK indicate that the server was able to locate the document successfully. The reply was provided on Tuesday, 07 Mar 2008 12:39:45 Greenwich Mean Time.
b) The document index.html was last modified on Saturday 10 Dec 2005 18:27:46 GMT.
c) There are 3874 bytes in the document being returned.
d) The first five bytes of the returned document are: <!doc. The server agreed to a persistent connection, as indicated by the Connection: Keep-Alive field.
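Problems 4 and 5 are exercises in reading HTTP headers, and the same extraction is easy to do programmatically. A minimal sketch that pulls the fields used above out of a raw response; the response text here is a shortened stand-in, not the full capture from the problem:

# Parse the status line and a few headers out of a raw HTTP response.
raw = ('HTTP/1.1 200 OK\r\n'
       'Date: Tue, 07 Mar 2008 12:39:45 GMT\r\n'
       'Last-Modified: Sat, 10 Dec 2005 18:27:46 GMT\r\n'
       'Content-Length: 3874\r\n'
       'Connection: Keep-Alive\r\n'
       '\r\n'
       '<!doc...')                        # body truncated (stand-in text)

head, body = raw.split('\r\n\r\n', 1)
lines = head.split('\r\n')
version, status, phrase = lines[0].split(' ', 2)
headers = dict(l.split(': ', 1) for l in lines[1:])
print(status + ' ' + phrase)              # 200 OK
print(headers['Last-Modified'])           # when index.html last changed
print(headers['Content-Length'])          # 3874 bytes in the document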
Problem 6
a) Persistent connections are discussed in section 8 of RFC 2616 (the real goal of this question was to get you to retrieve and read an RFC). Sections 8.1.2 and 8.1.2.1 of the RFC indicate that either the client or the server can indicate to the other that it is going to close the persistent connection. It does so by including the connection-token "close" in the Connection-header field of the HTTP request/reply.
b) HTTP does not provide any encryption services.
c) (From RFC 2616) "Clients that use persistent connections should limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy."
d) Yes. (From RFC 2616) "A client might have started to send a new request at the same time that the server has decided to close the 'idle' connection. From the server's point of view, the connection is being closed while it was idle, but from the client's point of view, a request is in progress."

Problem 7
The total amount of time to get the IP address is
RTT1 + RTT2 + ... + RTTn.
Once the IP address is known, RTT0 elapses to set up the TCP connection and another RTT0 elapses to request and receive the small object. The total response time is
2*RTT0 + RTT1 + RTT2 + ... + RTTn.

Problem 8
a) RTT1 + ... + RTTn + 2*RTT0 + 8 * 2*RTT0 = 18*RTT0 + RTT1 + ... + RTTn.
b) RTT1 + ... + RTTn + 2*RTT0 + 2 * 2*RTT0 = 6*RTT0 + RTT1 + ... + RTTn.
c) RTT1 + ... + RTTn + 2*RTT0 + RTT0 = 3*RTT0 + RTT1 + ... + RTTn.

Problem 9
a) The time to transmit an object of size L over a link of rate R is L/R. The average time is the average size of the object divided by R:
D = (850,000 bits)/(15,000,000 bits/sec) = 0.0567 sec
The traffic intensity on the link is given by bD = (16 requests/sec)(0.0567 sec/request) = 0.907. Thus, the average access delay is (0.0567 sec)/(1 - 0.907), which is approximately 0.6 seconds. The total average response time is therefore 0.6 sec + 3 sec = 3.6 sec.
b) The traffic intensity on the access link is reduced by 60% since 60% of the requests are satisfied within the institutional network. Thus the average access delay is (0.0567 sec)/[1 - (0.4)(0.907)] = 0.089 seconds. The response time is approximately zero if the request is satisfied by the cache (which happens with probability 0.6); the average response time is 0.089 sec + 3 sec = 3.089 sec for cache misses (which happens 40% of the time). So the average response time is (0.6)(0 sec) + (0.4)(3.089 sec) = 1.24 seconds. Thus the average response time is reduced from 3.6 sec to 1.24 sec.

Problem 10
Note that each downloaded object can be completely put into one data packet. Let Tp denote the one-way propagation delay between the client and the server. First consider parallel downloads using non-persistent connections. Parallel downloads would allow 10 connections to share the 150 bits/sec bandwidth, giving each just 15 bits/sec. Thus, the total time needed to receive all objects is given by:
(200/150 + Tp + 200/150 + Tp + 200/150 + Tp + 100,000/150 + Tp) + (200/(150/10) + Tp + 200/(150/10) + Tp + 200/(150/10) + Tp + 100,000/(150/10) + Tp)
= 7377 + 8*Tp (seconds)
Now consider a persistent HTTP connection. The total time needed is given by:
(200/150 + Tp + 200/150 + Tp + 200/150 + Tp + 100,000/150 + Tp) + 10*(200/150 + Tp + 100,000/150 + Tp)
= 7351 + 24*Tp (seconds)
Assuming the speed of light is 300*10^6 m/sec, then Tp = 10/(300*10^6) = 0.03 microsec. Tp is therefore negligible compared with transmission delay. Thus, we see that persistent HTTP is not significantly faster (less than 1 percent) than the non-persistent case with parallel download.

Problem 11
a) Yes, because Bob has more connections, he can get a larger share of the link bandwidth.
b) Yes, Bob still needs to perform parallel downloads; otherwise he will get less bandwidth than the other four users.

Problem 12
Server.py:

from socket import *
serverPort = 12000
serverSocket = socket(AF_INET, SOCK_STREAM)
serverSocket.bind(('', serverPort))
serverSocket.listen(1)
# accept a single client connection
connectionSocket, addr = serverSocket.accept()
while 1:
    sentence = connectionSocket.recv(1024)
    if not sentence:       # client closed the connection
        break
    print 'From Server:', sentence, '\n'
connectionSocket.close()
serverSocket.close()
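Server.py above needs a matching client to exercise it. A minimal sketch of one, assuming the server runs on the same machine (localhost) and on port 12000:

from socket import *

serverName = 'localhost'        # assumed: server on the same host
serverPort = 12000
clientSocket = socket(AF_INET, SOCK_STREAM)
clientSocket.connect((serverName, serverPort))
clientSocket.send('hello from the client'.encode())  # one line for the server to print
clientSocket.close()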
Problem 13
The MAIL FROM: in SMTP is a message from the SMTP client that identifies the sender of the mail message to the SMTP server. The From: on the mail message itself is NOT an SMTP message, but rather is just a line in the body of the mail message.

Problem 14
SMTP uses a line containing only a period to mark the end of a message body. HTTP uses the "Content-Length" header field to indicate the length of a message body. No, HTTP cannot use the method used by SMTP, because HTTP messages could be binary data, whereas in SMTP, the message body must be in 7-bit ASCII format.

Problem 15
MTA stands for Mail Transfer Agent. A host sends the message to an MTA. The message then follows a sequence of MTAs to reach the receiver's mail reader. We see that this spam message follows a chain of MTAs. An honest MTA should report where it received the message from. Notice that in this message, "asusus-4b96 ([58.88.21.177])" does not report from where it received the email. Since we assume only the originator is dishonest, "asusus-4b96 ([58.88.21.177])" must be the originator.

Problem 16
UIDL abbreviates "unique-ID listing". When a POP3 client issues the UIDL command, the server responds with the unique message ID for all of the messages present in the user's mailbox. This command is useful for "download and keep". By maintaining a file that lists the messages retrieved during earlier sessions, the client can use the UIDL command to determine which messages on the server have already been seen.

Problem 17
a)
C: dele 1
C: retr 2
S: (blah blah ...
S: .........blah)
S: .
C: dele 2
C: quit
S: +OK POP3 server signing off

b)
C: retr 2
S: blah blah ...
S: .........blah
S: .
C: quit
S: +OK POP3 server signing off

c)
C: list
S: 1 498
S: 2 912
S: .
C: retr 1
S: blah .....
S: ....blah
S: .
C: retr 2
S: blah blah ...
S: .........blah
S: .
C: quit
S: +OK POP3 server signing off
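The POP3 dialogs in Problem 17 map almost one-to-one onto Python's standard poplib module. A minimal sketch of the download-and-delete session from part (a); the server name and credentials are placeholders:

import poplib

# Placeholders: substitute a real POP3 server and account to run this.
conn = poplib.POP3('pop.example.com')
conn.user('bob')
conn.pass_('password')

conn.dele(1)                          # C: dele 1
resp, lines, octets = conn.retr(2)    # C: retr 2 (server returns the message lines)
conn.dele(2)                          # C: dele 2
conn.quit()                           # C: quit -> S: +OK POP3 server signing off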
Problem 18
a) For a given input of a domain name (such as ccn.com), an IP address or a network administrator name, the whois database can be used to locate the corresponding registrar, whois server, DNS server, and so on.
b) NS4.YAHOO.COM from www.register.com; NS1.MSFT.NET from www.register.com.
c) Local domain: www.mindspring.com
Web servers: www.mindspring.com (207.69.189.21, 207.69.189.22, 207.69.189.23, 207.69.189.24, 207.69.189.25, 207.69.189.26, 207.69.189.27, 207.69.189.28)
Mail servers: mx1.mindspring.com (207.69.189.217), mx2.mindspring.com (207.69.189.218), mx3.mindspring.com (207.69.189.219), mx4.mindspring.com (207.69.189.220)
Name servers: itchy.earthlink.net (207.69.188.196), scratchy.earthlink.net (207.69.188.197)

www.yahoo.com:
Web servers: www.yahoo.com (216.109.112.135, 66.94.234.13)
Mail servers: a.mx.mail.yahoo.com (209.191.118.103), b.mx.mail.yahoo.com (66.196.97.250), c.mx.mail.yahoo.com (68.142.237.182, 216.39.53.3), d.mx.mail.yahoo.com (216.39.53.2), e.mx.mail.yahoo.com (216.39.53.1), f.mx.mail.yahoo.com (209.191.88.247, 68.142.202.247), g.mx.mail.yahoo.com (209.191.88.239, 206.190.53.191)
Name servers: ns1.yahoo.com (66.218.71.63), ns2.yahoo.com (68.142.255.16), ns3.yahoo.com (217.12.4.104), ns4.yahoo.com (68.142.196.63), ns5.yahoo.com (216.109.116.17), ns8.yahoo.com (202.165.104.22), ns9.yahoo.com (202.160.176.146)

www.hotmail.com:
Web servers: www.hotmail.com (64.4.33.7, 64.4.32.7)
Mail servers: mx1.hotmail.com (65.54.245.8, 65.54.244.8, 65.54.244.136), mx2.hotmail.com (65.54.244.40, 65.54.244.168, 65.54.245.40), mx3.hotmail.com (65.54.244.72, 65.54.244.200, 65.54.245.72), mx4.hotmail.com (65.54.244.232, 65.54.245.104, 65.54.244.104)
Name servers: ns1.msft.net (207.68.160.190), ns2.msft.net (65.54.240.126), ns3.msft.net (213.199.161.77), ns4.msft.net (207.46.66.126), ns5.msft.net (65.55.238.126)

d) The yahoo web server has multiple IP addresses: www.yahoo.com (216.109.112.135, 66.94.234.13).
e) The address range for Polytechnic University: 128.238.0.0 - 128.238.255.255.
f) An attacker can use the whois database and the nslookup tool to determine the IP address ranges, DNS server addresses, etc., for the target institution.
g) By analyzing the source address of attack packets, the victim can use whois to obtain information about the domain from which the attack is coming and possibly inform the administrators of the origin domain.

Problem 19
a) The following delegation chain is used for gaia.cs.umass.edu:
a.root-servers.net
E.GTLD-SERVERS.NET
ns1.umass.edu (authoritative)
First command:
dig +norecurse @a.root-servers.net any gaia.cs.umass.edu
;; AUTHORITY SECTION:
edu. 172800 IN NS E.GTLD-SERVERS.NET.
edu. 172800 IN NS A.GTLD-SERVERS.NET.
edu. 172800 IN NS G3.NSTLD.COM.
edu. 172800 IN NS D.GTLD-SERVERS.NET.
edu. 172800 IN NS H3.NSTLD.COM.
edu. 172800 IN NS L3.NSTLD.COM.
edu. 172800 IN NS M3.NSTLD.COM.
edu. 172800 IN NS C.GTLD-SERVERS.NET.
Among all returned edu DNS servers, we send a query to the first one:
dig +norecurse @E.GTLD-SERVERS.NET any gaia.cs.umass.edu
umass.edu. 172800 IN NS ns1.umass.edu.
umass.edu. 172800 IN NS ns2.umass.edu.
umass.edu. 172800 IN NS ns3.umass.edu.
Among all three returned authoritative DNS servers, we send a query to the first one:
dig +norecurse @ns1.umass.edu any gaia.cs.umass.edu
gaia.cs.umass.edu. 21600 IN A 128.119.245.12
b) The answer for google.com could be:
a.root-servers.net
E.GTLD-SERVERS.NET
ns1.google.com (authoritative)

Problem 20
We can periodically take a snapshot of the DNS caches in the local DNS servers. The Web server that appears most frequently in the DNS caches is the most popular server. This is because if more users are interested in a Web server, then DNS requests for that server are more frequently sent by users. Thus, that Web server will appear in the DNS caches more frequently. For a complete measurement study, see: Craig E. Wills, Mikhail Mikhailov, Hao Shang, "Inferring Relative Popularity of Internet Applications by Actively Querying DNS Caches", in IMC'03, October 27-29, 2003, Miami Beach, Florida, USA.

Problem 21
Yes, we can use dig to query that Web site from the local DNS server. For example, "dig cnn.com" will return the query time for finding cnn.com. If cnn.com was just accessed a couple of seconds ago, an entry for cnn.com is cached in the local DNS cache, so the query time is 0 msec. Otherwise, the query time is large.
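The caching effect in Problem 21 can also be observed from Python: the first resolution of a name typically takes longer than an immediately repeated one served from a nearby cache. A minimal sketch; the exact timings depend entirely on your resolver and OS, and cnn.com is just the example name from the problem:

import socket, time

def timed_lookup(name):
    start = time.time()
    socket.gethostbyname(name)            # triggers a DNS resolution
    return (time.time() - start) * 1000   # milliseconds

# The first lookup may go out to the DNS hierarchy; the repeat is usually cached.
print('cold: %.1f ms' % timed_lookup('cnn.com'))
print('warm: %.1f ms' % timed_lookup('cnn.com'))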
Problem 22
For calculating the minimum distribution time for client-server distribution, we use the following formula:
Dcs = max {NF/us, F/dmin}
Similarly, for calculating the minimum distribution time for P2P distribution, we use the following formula:
DP2P = max {F/us, F/dmin, NF/(us + u1 + u2 + ... + uN)}
where F = 15 Gbits = 15 * 1024 Mbits, us = 30 Mbps, and dmin = di = 2 Mbps. Note, 300 Kbps = 300/1024 Mbps.

Client-server distribution time (seconds):
                N = 10    N = 100    N = 1000
u = 300 Kbps      7680      51200      512000
u = 700 Kbps      7680      51200      512000
u = 2 Mbps        7680      51200      512000

Peer-to-peer distribution time (seconds):
                N = 10    N = 100    N = 1000
u = 300 Kbps      7680      25904       47559
u = 700 Kbps      7680      15616       21525
u = 2 Mbps        7680       7680        7680

Problem 23
a) Consider a distribution scheme in which the server sends the file to each client, in parallel, at a rate of us/N. Note that this rate is less than each of the clients' download rate, since by assumption us/N ≤ dmin. Thus each client can also receive at rate us/N. Since each client receives at rate us/N, the time for each client to receive the entire file is F/(us/N) = NF/us. Since all the clients receive the file in NF/us, the overall distribution time is also NF/us.
b) Consider a distribution scheme in which the server sends the file to each client, in parallel, at a rate of dmin. Note that the aggregate rate, N*dmin, is less than the server's link rate us, since by assumption us/N ≥ dmin. Since each client receives at rate dmin, the time for each client to receive the entire file is F/dmin. Since all the clients receive the file in this time, the overall distribution time is also F/dmin.
c) From Section 2.6 we know that DCS ≥ max {NF/us, F/dmin} (Equation 1). Suppose that us/N ≤ dmin. Then from Equation 1 we have DCS ≥ NF/us. But from (a) we have DCS ≤ NF/us. Combining these two gives DCS = NF/us when us/N ≤ dmin (Equation 2). We can similarly show that DCS = F/dmin when us/N ≥ dmin (Equation 3). Combining Equation 2 and Equation 3 gives the desired result.
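The numbers in Problem 22's tables (computed from the same formulas Problem 23 just justified) can be reproduced directly. A minimal sketch that regenerates the P2P rows:

# Minimum distribution times (Section 2.6 formulas), in seconds.
F = 15 * 1024           # file size, Mbits
us = 30.0               # server upload rate, Mbps
dmin = 2.0              # slowest client download rate, Mbps

def d_cs(N):
    return max(N * F / us, F / dmin)

def d_p2p(N, u):
    # all N peers have the same upload rate u here
    return max(F / us, F / dmin, N * F / (us + N * u))

for u_kbps in (300, 700, 2048):            # 2048 Kbps = 2 Mbps exactly
    u = u_kbps / 1024.0                    # client upload rate, Mbps
    row = [int(round(d_p2p(N, u))) for N in (10, 100, 1000)]
    print('u = %4d Kbps: %s' % (u_kbps, row))
# u =  300 Kbps: [7680, 25904, 47559]  -- matches the P2P table above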
Next, peer 5 sends this predecessor and successor information back to 6. Peer 6 can now join the DHT by making peer 8 its successor and by notifying peer 5 that it should change its immediate successor to 6.

Problem 29

For each key, we first calculate the distances (using d(k,p)) between itself and all peers, and then store the key in the peer that is closest to the key (that is, the peer with the smallest distance value).

Problem 30

Yes. Randomly assigning keys to peers does not consider the underlying network at all, so it is very likely to cause mismatches. Such mismatches may degrade search performance. For example, consider a logical path p1 (consisting of only two logical links) A-B-C, where A and B are neighboring peers and B and C are neighboring peers. Suppose that there is another logical path p2 from A to C (consisting of three logical links): A-D-E-C. It might be the case that A and B are very far apart physically (and separated by many routers), and that B and C are very far apart physically (and separated by many routers), while A, D, E, and C are all physically close (and separated by few routers). In other words, a shorter logical path may correspond to a much longer physical path.

Problem 31

If you run TCPClient first, the client will attempt to make a TCP connection with a non-existent server process; the TCP connection will not be made. UDPClient doesn't establish a TCP connection with the server, so everything should work fine if you first run UDPClient, then run UDPServer, and then type some input at the keyboard. If you use different port numbers, the client will attempt to establish a TCP connection with the wrong process or a non-existent process, and errors will occur.

Problem 32

In the original program, UDPClient does not specify a port number when it creates the socket. In this case, the code lets the underlying operating system choose a port number. With the additional line, when UDPClient is executed a UDP socket is created with port number 5432 (see the Python sketch at the end of this section).

UDPServer needs to know the client port number so that it can send packets back to the correct client socket. Glancing at UDPServer, we see that the client port number is not "hard-wired" into the server code; instead, UDPServer determines the client port number by unraveling the datagram it receives from the client. Thus UDPServer will work with any client port number, including 5432, and therefore does not need to be modified.

Before: client socket = x (chosen by the OS); server socket = 9876.
After: client socket = 5432.

Problem 33

Yes, you can configure many browsers to open multiple simultaneous connections to a Web site. The advantage is that you will potentially download the file faster. The disadvantage is that you may be hogging the bandwidth, thereby significantly slowing down the downloads of other users who share the same physical links.

Problem 34

For an application such as remote login (telnet and ssh), a byte-stream-oriented protocol is very natural, since there is no notion of message boundaries in the application. When a user types a character, we simply drop the character into the TCP connection. In other applications, we may send a series of messages that have inherent boundaries between them; for example, when one SMTP mail server sends another SMTP mail server several email messages back to back.
Since TCP does not have a mechanism to indicate the boundaries, the application must add the indications itself, so that the receiving side of the application can distinguish one message from the next. If each message were instead put into a distinct UDP segment, the receiving end would be able to distinguish the various messages without any indications added by the sending side of the application.

Problem 35

To create a web server, we need to run web server software on a host. Many vendors sell web server software. However, the most popular web server software today is Apache, which is open source and free. Over the years it has been highly optimized by the open-source community.

Problem 36

The key is the infohash; the value is an IP address that currently has the file designated by the infohash.

Chapter 3 Review Questions

Call this protocol Simple Transport Protocol (STP). At the sender side, STP accepts from the sending process a chunk of data not exceeding 1196 bytes, a destination host address, and a destination port number. STP adds a four-byte header to each chunk and puts the port number of the destination process in this header. STP then gives the destination host address and the resulting segment to the network layer. The network layer delivers the segment to STP at the destination host. STP then examines the port number in the segment, extracts the data from the segment, and passes the data to the process identified by the port number.

The segment now has two header fields: a source port field and a destination port field. At the sender side, STP accepts a chunk of data not exceeding 1192 bytes, a destination host address, a source port number, and a destination port number. STP creates a segment which contains the application data, source port number, and destination port number. It then gives the segment and the destination host address to the network layer. After receiving the segment, STP at the receiving host gives the application process the application data and the source port number.

No, the transport layer does not have to do anything in the core; the transport layer "lives" in the end systems.

For sending a letter, the family member is required to give the delegate the letter itself, the address of the destination house, and the name of the recipient. The delegate clearly writes the recipient's name on the top of the letter. The delegate then puts the letter in an envelope and writes the address of the destination house on the envelope. The delegate then gives the letter to the planet's mail service. At the receiving side, the delegate receives the letter from the mail service, takes the letter out of the envelope, and takes note of the recipient name written at the top of the letter. The delegate then gives the letter to the family member with this name. No, the mail service does not have to open the envelope; it only examines the address on the envelope.

Source port number y and destination port number x.

An application developer may not want its application to use TCP's congestion control, which can throttle the application's sending rate at times of congestion. Often, designers of IP telephony and IP videoconference applications choose to run their applications over UDP because they want to avoid TCP's congestion control. Also, some applications do not need the reliable data transfer provided by TCP.

Since most firewalls are configured to block UDP traffic, using TCP for video and voice traffic lets the traffic through the firewalls.

Yes.
The application developer can put reliable data transfer into the application layer protocol. This would require a significant amount of work and debugging, however.

Yes, both segments will be directed to the same socket. For each received segment, at the socket interface, the operating system will provide the process with the source IP addresses to determine the origins of the individual segments.

For each persistent connection, the Web server creates a separate "connection socket". Each connection socket is identified by a four-tuple: (source IP address, source port number, destination IP address, destination port number). When host C receives an IP datagram, it examines these four fields in the datagram/segment to determine to which socket it should pass the payload of the TCP segment. Thus, the requests from A and B pass through different sockets. The identifiers for both of these sockets have 80 for the destination port; however, the identifiers for these sockets have different values for source IP addresses. Unlike UDP, when the transport layer passes a TCP segment's payload to the application process, it does not specify the source IP address, as this is implicitly specified by the socket identifier.

Sequence numbers are required for a receiver to find out whether an arriving packet contains new data or is a retransmission.

To handle losses in the channel. If the ACK for a transmitted packet is not received within the duration of the timer for the packet, the packet (or its ACK or NACK) is assumed to have been lost. Hence, the packet is retransmitted.

A timer would still be necessary in the protocol rdt 3.0. If the round-trip time is known, the only advantage is that the sender knows for sure that either the packet or the ACK (or NACK) for the packet has been lost, as compared to the real scenario, where the ACK (or NACK) might still be on the way to the sender after the timer expires. However, to detect the loss, a timer of constant duration will still be necessary at the sender for each packet.

The packet loss caused a timeout, after which all five packets were retransmitted. Loss of an ACK did not trigger any retransmission, as Go-Back-N uses cumulative acknowledgements. The sender was unable to send a sixth packet because the send window size is fixed at 5.

When the packet was lost, the four received packets were buffered by the receiver. After the timeout, the sender retransmitted the lost packet, and the receiver delivered the buffered packets to the application in the correct order. A duplicate ACK was sent by the receiver for the lost ACK. The sender was unable to send a sixth packet because the send window size is fixed at 5.
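Returning to Problem 32, a minimal sketch of the client with the "additional line" might look as follows (Python 3; the surrounding client code is an illustrative paraphrase of the textbook-style UDPClient, and the server hostname is a placeholder):

from socket import *

serverName = 'servername'                 # placeholder: host running UDPServer
serverPort = 9876                         # the server port used in this problem
clientSocket = socket(AF_INET, SOCK_DGRAM)
clientSocket.bind(('', 5432))             # the additional line: fix the client's source port to 5432
message = input('Input lowercase sentence: ')
clientSocket.sendto(message.encode(), (serverName, serverPort))
modifiedMessage, serverAddress = clientSocket.recvfrom(2048)
print(modifiedMessage.decode())
clientSocket.close()

Because UDPServer reads the client's address and port from each arriving datagram, no change is needed on the server side.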
Git 2.23 Release Notes
======================

Updates since v2.22
-------------------

Backward compatibility note

 * The "--base" option of "format-patch" computed the patch-ids for prerequisite patches in an unstable way, which has been updated to compute in a way that is compatible with "git patch-id --stable".

 * The "git log" command by default behaves as if the --mailmap option was given.

UI, Workflows & Features

 * The "git fast-export/import" pair has been taught to handle commits with log messages in an encoding other than UTF-8 better.

 * In recent versions of Git, per-worktree refs are exposed in the refs/worktrees/<wtname>/ hierarchy, which means that worktree names must be a valid refname component. The code now sanitizes the names given to worktrees, to make sure these refs are well-formed.

 * "git merge" learned a "--quit" option that cleans up the in-progress merge while leaving the working tree and the index still in a mess.

 * "git format-patch" learns a configuration to set the default for its --notes=<ref> option.

 * The code to show args with a potential typo that cannot be interpreted as a commit-ish has been improved.

 * "git clone --recurse-submodules" learned to set up the submodules to ignore commit object names recorded in the superproject gitlink and instead use the commits that happen to be at the tip of the remote-tracking branches from the get-go, by passing the new "--remote-submodules" option.

 * The pattern "git diff/grep" uses to extract funcname and words boundary for Matlab has been extended to cover Octave, which is more or less equivalent.

 * "git help git" was hard to discover (well, at least for some people).

 * The pattern "git diff/grep" uses to extract funcname and words boundary for Rust has been added.

 * "git status" can be told a non-standard default value for the "--[no-]ahead-behind" option with a new configuration variable status.aheadBehind.

 * "git fetch" and "git pull" report when a fetch results in non-fast-forward updates to let the user notice the unusual situation. The commands learned a "--no-show-forced-updates" option to disable this safety feature.

 * Two new commands "git switch" and "git restore" are introduced to split "checking out a branch to work on advancing its history" and "checking out paths out of the index and/or a tree-ish to work on advancing the current history" out of the single "git checkout" command (see the example following these notes).

 * "git branch --list" learned to always output the detached HEAD as the first item (when the HEAD is detached, of course), regardless of the locale.

 * The conditional inclusion mechanism learned to base the choice on the branch the HEAD currently is on.

 * "git rev-list --objects" learned the "--no-object-names" option to squelch the path to the object that is used as a grouping hint for pack-objects.

 * A new tag.gpgSign configuration variable turns "git tag -a" into "git tag -s".

 * "git multi-pack-index" learned expire and repack subcommands.

 * "git blame" learned to "ignore" commits in the history, whose effects (as well as their presence) get ignored.

 * "git cherry-pick/revert" learned a new "--skip" action.

 * The tips of refs from the alternate object store can be used as starting points for reachability computation now.

 * Extra blank lines in "git status" output have been reduced.

 * The commits in a repository can be described by multiple commit-graph files now, which allows the commit-graph files to be updated incrementally.
* "git range-diff" output has been tweaked for easier identification of which part of what file the patch shown is about. Performance, Internal Implementation, Development Support etc. * Update supporting parts of "git rebase" to remove code that should no longer be used. * Developer support to emulate unsatisfied prerequisites in tests to ensure that the remainder of the tests still succeeds when tests with prerequisites are skipped. * "git update-server-info" learned not to rewrite the file with the same contents. * The way of specifying the path to find dynamic libraries at runtime has been simplified. The old default to pass -R/path/to/dir has been replaced with the new default to pass -Wl,-rpath,/path/to/dir, which is the more recent GCC uses. Those who need to build with an old GCC can still use "CC_LD_DYNPATH=-R" * Prepare use of reachability index in topological walker that works on a range (A..B). * A new tutorial targeting specifically aspiring git-core developers has been added. * Auto-detect how to tell HP-UX aCC where to use dynamically linked libraries from at runtime. * "git mergetool" and its tests now spawn fewer subprocesses. * Dev support update to help tracing out tests. * Support to build with MSVC has been updated. * "git fetch" that grabs from a group of remotes learned to run the auto-gc only once at the very end. * A handful of Windows build patches have been upstreamed. * The code to read state files used by the sequencer machinery for "git status" has been made more robust against a corrupt or stale state files. * "git for-each-ref" with multiple patterns have been optimized. * The tree-walk API learned to pass an in-core repository instance throughout more codepaths. * When one step in multi step cherry-pick or revert is reset or committed, the command line prompt script failed to notice the current status, which has been improved. * Many GIT_TEST_* environment variables control various aspects of how our tests are run, but a few followed "non-empty is true, empty or unset is false" while others followed the usual "there are a few ways to spell true, like yes, on, etc., and also ways to spell false, like no, off, etc." convention. * Adjust the dir-iterator API and apply it to the local clone optimization codepath. * We have been trying out a few language features outside c89; the coding guidelines document did not talk about them and instead had a blanket ban against them. * A test helper has been introduced to optimize preparation of test repositories with many simple commits, and a handful of test scripts have been updated to use it. Fixes since v2.22 ----------------- * A relative pathname given to "git init --template=<path> <repo>" ought to be relative to the directory "git init" gets invoked in, but it instead was made relative to the repository, which has been corrected. * "git worktree add" used to fail when another worktree connected to the same repository was corrupt, which has been corrected. * The ownership rule for the file descriptor to fast-import remote backend was mixed up, leading to an unrelated file descriptor getting closed, which has been fixed. * A "merge -c" instruction during "git rebase --rebase-merges" should give the user a chance to edit the log message, even when there is otherwise no need to create a new merge and replace the existing one (i.e. fast-forward instead), but did not. Which has been corrected. * Code cleanup and futureproof. * More parameter validation. 
* "git update-server-info" used to leave stale packfiles in its output, which has been corrected. * The server side support for "git fetch" used to show incorrect value for the HEAD symbolic ref when the namespace feature is in use, which has been corrected. * "git am -i --resolved" segfaulted after trying to see a commit as if it were a tree, which has been corrected. * "git bundle verify" needs to see if prerequisite objects exist in the receiving repository, but the command did not check if we are in a repository upfront, which has been corrected. * "git merge --squash" is designed to update the working tree and the index without creating the commit, and this cannot be countermanded by adding the "--commit" option; the command now refuses to work when both options are given. * The data collected by fsmonitor was not properly written back to the on-disk index file, breaking t7519 tests occasionally, which has been corrected. * Update to Unicode 12.1 width table. * The command line to invoke a "git cat-file" command from inside "git p4" was not properly quoted to protect a caret and running a broken command on Windows, which has been corrected. * "git request-pull" learned to warn when the ref we ask them to pull from in the local repository and in the published repository are different. * When creating a partial clone, the object filtering criteria is recorded for the origin of the clone, but this incorrectly used a hardcoded name "origin" to name that remote; it has been corrected to honor the "--origin <name>" option. * "git fetch" into a lazy clone forgot to fetch base objects that are necessary to complete delta in a thin packfile, which has been corrected. * The filter_data used in the list-objects-filter (which manages a lazily sparse clone repository) did not use the dynamic array API correctly---'nr' is supposed to point at one past the last element of the array in use. This has been corrected. * The description about slashes in gitignore patterns (used to indicate things like "anchored to this level only" and "only matches directories") has been revamped. * The URL decoding code has been updated to avoid going past the end of the string while parsing %-<hex>-<hex> sequence. * The list of for-each like macros used by clang-format has been updated. * "git branch --list" learned to show branches that are checked out in other worktrees connected to the same repository prefixed with '+', similar to the way the currently checked out branch is shown with '*' in front. (merge 6e9381469e nb/branch-show-other-worktrees-head later to maint). * Code restructuring during 2.20 period broke fetching tags via "import" based transports. * The commit-graph file is now part of the "files that the runtime may keep open file descriptors on, all of which would need to be closed when done with the object store", and the file descriptor to an existing commit-graph file now is closed before "gc" finalizes a new instance to replace it. * "git checkout -p" needs to selectively apply a patch in reverse, which did not work well. * Code clean-up to avoid signed integer wraparounds during binary search. * "git interpret-trailers" always treated '#' as the comment character, regardless of core.commentChar setting, which has been corrected. * "git stash show 23" used to work, but no more after getting rewritten in C; this regression has been corrected. * "git rebase --abort" used to leave refs/rewritten/ when concluding "git rebase -r", which has been corrected. 
 * An incorrect list of options was cached after command line completion failed (e.g. trying to complete a command that requires a repository outside one), which has been corrected.

 * The code to parse scaled numbers out of configuration files has been made more robust and also easier to follow.

 * The codepath to compute delta islands used to spew progress output without giving the callers any way to squelch it, which has been fixed.

 * Protocol capabilities that go over the wire should never be translated, but one was incorrectly marked for translation, which has been corrected. The output of protocol capabilities for debugging has been tweaked a bit.

 * Use the "Erase in Line" CSI sequence that is already used in the editor support to clear cruft in the progress output.

 * "git submodule foreach" did not protect command line options passed to the command to be run in each submodule correctly, when the "--recursive" option was in use.

 * The configuration variable rebase.rescheduleFailedExec should be effective only while running an interactive rebase and should not affect anything when running a non-interactive one, which was not the case. This has been corrected.

 * The "git clone" documentation refers to command line options in its description in the short form; they have been replaced with long forms to make them more recognisable.

 * Generation of pack bitmaps is now disabled when .keep files exist, as these are mutually exclusive features. (merge 7328482253 ew/repack-with-bitmaps-by-default later to maint).

 * "git rm" to resolve a conflicted path leaked an internal message "needs merge" before actually removing the path, which was confusing. This has been corrected.

 * "git stash --keep-index" did not work correctly on paths that have been removed, which has been fixed. (merge b932f6a5e8 tg/stash-keep-index-with-removed-paths later to maint).

 * Window 7 update ;-)

 * A codepath that reads from GPG for signed object verification read past the end of an allocated buffer, which has been fixed.

 * "git clean" silently skipped a path when it cannot lstat() it; now it gives a warning.

 * "git push --atomic" that goes over the transport-helper (namely, the smart http transport) failed to prevent refs from being pushed when it can locally tell that one of the ref updates will fail without having to consult the other end, which has been corrected.

 * The internal diff machinery can be made to read out of bounds while looking for a --function-context line in a corner case, which has been corrected. (merge b777f3fd61 jk/xdiff-clamp-funcname-context-index later to maint).

 * Other code cleanup, docfix, build fix, etc. (merge fbec05c210 cc/test-oidmap later to maint). (merge 7a06fb038c jk/no-system-includes-in-dot-c later to maint). (merge 81ed2b405c cb/xdiff-no-system-includes-in-dot-c later to maint). (merge d61e6ce1dd sg/fsck-config-in-doc later to maint).
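As a quick illustration of the new commands and options mentioned above (this example is not part of the release notes themselves; the file name is a placeholder):

$ git switch -c topic origin/master   # create and switch to branch "topic" (one half of the old "git checkout")
$ git restore --staged README.md      # unstage changes to README.md (the other half)
$ git restore README.md               # discard working-tree changes to README.md
$ git merge --quit                    # abandon an in-progress merge, leaving tree and index as they are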
Contents

- Overview
- Lesson 1: Concepts – Locks and Lock Manager
- Lesson 2: Concepts – Batch and Transaction
- Lesson 3: Concepts – Locks and Applications
- Lesson 4: Information Collection and Analysis
- Lesson 5: Concepts – Formulating and Implementing Resolution

Module 4: Troubleshooting Locking and Blocking

Overview

At the end of this module, you will be able to:

- Discuss how the lock manager uses lock mode, lock resources, and lock compatibility to achieve transaction isolation.
- Describe the various transaction types and how transactions differ from batches.
- Describe how to troubleshoot blocking and locking issues.
- Analyze the output of blocking scripts and Microsoft® SQL Server™ Profiler to troubleshoot locking and blocking issues.
- Formulate hypotheses to resolve locking and blocking issues.

Lesson 1: Concepts – Locks and Lock Manager

This lesson outlines some of the common causes that contribute to the perception of a slow server.

What You Will Learn

After completing this lesson, you will be able to:

- Describe the locking architecture used by SQL Server.
- Identify the various lock modes used by SQL Server.
- Discuss lock compatibility and concurrent access.
- Identify different types of lock resources.
- Discuss dynamic locking and lock escalation.
- Differentiate locks, latches, and other SQL Server internal "locking" mechanisms such as spinlocks and other synchronization objects.

Recommended Reading

- Chapter 14, "Locking", Inside SQL Server 2000 by Kalen Delaney
- SOX000821700049 – SQL 7.0 How to interpret lock resource Ids
- SOX000925700237 – TITLE: Lock escalation in SQL 7.0
- SOX001109700040 – INF: Queries with PREFETCH in the plan hold lock until the end of transaction

Locking Concepts

Delivery Tip: Prior to delivering this material, test the class to see if they fully understand the different isolation levels. If the class is not confident in their understanding, review appendix A04_Locking and its accompanying PowerPoint® file.

Transactions in SQL Server provide the ACID properties:

Atomicity: A transaction either commits or aborts. If a transaction commits, all of its effects remain. If it aborts, all of its effects are undone. It is an "all or nothing" operation.

Consistency: An application should maintain the consistency of a database. For example, if you defer constraint checking, it is your responsibility to ensure that the database is consistent.

Isolation: Concurrent transactions are isolated from the updates of other incomplete transactions. These updates do not constitute a consistent state. This property is often called serializability. For example, a second transaction traversing the doubly linked list mentioned above would see the list before or after the insert, but it will see only complete changes.

Durability: After a transaction commits, its effects will persist even if there are system failures.

Consistency and isolation are the most important in describing SQL Server's locking model. It is up to the application to define what consistency means, and isolation in some form is needed to achieve consistent results. SQL Server uses locking to achieve isolation.

Definition of Dependency: A set of transactions can run concurrently if their outputs are disjoint from the union of one another's input and output sets. For example, if T1 writes some object that is in T2's input or output set, there is a dependency between T1 and T2.

Bad Dependencies: These include lost updates, dirty reads, non-repeatable reads, and phantoms (a short example follows).
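As a minimal illustration of one bad dependency, the following pair of connections produces a non-repeatable read under READ COMMITTED (a sketch, assuming the pubs sample database):

-- Connection 1
USE pubs
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
BEGIN TRAN
SELECT price FROM titles WHERE title_id = 'BU1032'   -- first read

-- Connection 2 (runs while connection 1's transaction is still open)
USE pubs
UPDATE titles SET price = price + 1 WHERE title_id = 'BU1032'

-- Connection 1
SELECT price FROM titles WHERE title_id = 'BU1032'   -- second read sees a different value
COMMIT TRAN

Under REPEATABLE READ or SERIALIZABLE, connection 1 would hold its shared locks until the end of the transaction and connection 2's update would block instead.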
ANSI SQL Isolation Levels

An isolation level determines the degree to which data is isolated for use by one process and guarded against interference from other processes. Prior to SQL Server 7.0, the REPEATABLE READ and SERIALIZABLE isolation levels were synonymous; there was no way to prevent non-repeatable reads while not preventing phantoms.

By default, SQL Server 2000 operates at an isolation level of READ COMMITTED. To make use of either more or less strict isolation levels in applications, locking can be customized for an entire session by setting the isolation level of the session with the SET TRANSACTION ISOLATION LEVEL statement. To determine the transaction isolation level currently set, use the DBCC USEROPTIONS statement, for example:

USE pubs
GO
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
GO
DBCC USEROPTIONS
GO

Multigranular Locking

In our example, if one transaction (T1) holds an exclusive lock at the table level, and another transaction (T2) holds an exclusive lock at the row level, each of the transactions believes it has exclusive access to the resource. In this scenario, since T1 believes it locks the entire table, it might inadvertently make changes to the same row that T2 thought it had locked exclusively. In a multigranular locking environment, there must be a way to effectively overcome this scenario. The intent lock is the answer to this problem.

Intent Lock

Intent lock is the term used to mean placing a marker in a higher-level lock queue. The type of intent lock can also be called the multigranular lock mode. An intent lock indicates that SQL Server wants to acquire a shared (S) lock or exclusive (X) lock on some of the resources lower down in the hierarchy. For example, a shared intent lock placed at the table level means that a transaction intends to place shared (S) locks on pages or rows within that table. Setting an intent lock at the table level prevents another transaction from subsequently acquiring an exclusive (X) lock on the table containing that page. Intent locks improve performance because SQL Server examines intent locks only at the table level to determine whether a transaction can safely acquire a lock on that table. This removes the requirement to examine every row or page lock on the table to determine whether a transaction can lock the entire table.

Lock Mode

The code shown in the slide represents how the lock mode is stored internally. You can see these codes by querying the master.dbo.spt_values table:

SELECT * FROM master.dbo.spt_values WHERE type = N'L'

However, the req_mode column of master.dbo.syslockinfo has a lock mode code that is one less than the code values shown here. For example, a value of req_mode = 3 represents the Shared lock mode rather than the Schema Modification lock mode.

Lock Compatibility

These locks can apply at any coarser level of granularity. If a row is locked, SQL Server will apply intent locks at both the page and the table level. If a page is locked, SQL Server will apply an intent lock at the table level. SIX locks imply that we have shared access to a resource and have also placed X locks at a lower level in the hierarchy. SQL Server never asks for SIX locks directly; they are always the result of a conversion. For example, suppose a transaction scanned a page using an S lock and then subsequently decided to perform a row-level update. The row would obtain an X lock, but now the page would require an IX lock. The resultant mode on the page would be SIX. (A quick way to observe intent locks is sketched below.)
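The following sketch shows multigranular locking in action (assuming the pubs sample database; the exact rows returned by sp_lock will vary):

USE pubs
GO
BEGIN TRAN
UPDATE authors SET au_lname = au_lname WHERE au_id = '172-32-1176'
DECLARE @spid int
SET @spid = @@SPID
EXEC sp_lock @spid   -- typically shows an X lock on the KEY, with IX locks on the page (PAG) and table (TAB) containing it
ROLLBACK TRAN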
Another type of table lock is the schema stability lock (Sch-S), which is compatible with all table locks except the schema modification lock (Sch-M). The schema modification lock (Sch-M) is incompatible with all table locks.

Locking Resources

Delivery Tip: Note the differences between Key and Key Range locks. Key Range locks will be covered in a couple of slides.

SQL Server can lock these resources:

Item         Description
DB           A database.
File         A database file.
Index        An entire index of a table.
Table        An entire table, including all data and indexes.
Extent       A contiguous group of data pages or index pages.
Page         An 8-KB data page or index page.
Key          A row lock within an index.
Key-range    A key range. Used to lock ranges between records in a table to prevent phantom insertions into or deletions from a set of records. Ensures serializable transactions.
RID          A row identifier. Used to individually lock a single row within a table.
Application  A lock resource defined by an application. The lock manager knows nothing about the resource format. It simply compares the 'strings' representing the lock resources to determine whether it has found a match. If a match is found, it knows that resource is already locked.

Some of the resources have "sub-resources." The following are the sub-resources displayed by the sp_lock output:

Database lock sub-resources:
- Full Database Lock (default)
- [BULK-OP-DB] – Bulk Operation Lock for Database
- [BULK-OP-LOG] – Bulk Operation Lock for Log

Table lock sub-resources:
- Full Table Lock (default)
- [UPD-STATS] – Update Statistics Lock
- [COMPILE] – Compile Lock

Index lock sub-resources:
- Full Index Lock (default)
- [INDEX_ID] – Index ID Lock
- [INDEX_NAME] – Index Name Lock
- [BULK_ALLOC] – Bulk Allocation Lock
- [DEFRAG] – Defragmentation Lock

For more information, see also SOX000821700049 SQL 7.0 How to interpret lock resource Ids.

Lock Resource Block

The resource type has the following resource block format:

Resource Type (Code)  Content
DB (2)                Data 1: sub-resource; Data 2: 0; Data 3: 0
File (3)              Data 1: File ID; Data 2: 0; Data 3: 0
Index (4)             Data 1: Object ID; Data 2: sub-resource; Data 3: Index ID
Table (5)             Data 1: Object ID; Data 2: sub-resource; Data 3: 0
Page (6)              Data 1: Page Number; Data 3: 0
Key (7)               Data 1: Object ID; Data 2: Index ID; Data 3: Hashed Key
Extent (8)            Data 1: Extent ID; Data 3: 0
RID (9)               Data 1: RID; Data 3: 0
Application (10)      Data 1: Application resource name

The rsc_bin column of master..syslockinfo contains the resource block in hexadecimal format. For an example of how to decode a value from this column using the information above, let us assume we have the following value:

0x000705001F83D775010002014F0BEC4E

With byte swapping within each field, this can be decoded as:

- Byte 0: Flag – 0x00
- Byte 1: Resource Type – 0x07 (Key)
- Bytes 2-3: DBID – 0x0005
- Bytes 4-7: ObjectID – 0x75D7831F (1977058079)
- Bytes 8-9: IndexID – 0x0001
- Bytes 10-15: Hash Key value – 0x02014F0BEC4E

For more information about how to decode this value, see also Inside SQL Server 2000, pages 803 and 806.

Key Range Locking

To support SERIALIZABLE transaction semantics, SQL Server needs to lock sets of rows specified by a predicate, such as

WHERE salary BETWEEN 30000 AND 50000

SQL Server needs to lock data that does not exist! If no rows satisfy the WHERE condition the first time the range is scanned, no rows should be returned on any subsequent scans. Key range locks are similar to row locks on index keys (whether clustered or not). The locks are placed on individual keys rather than at the node level.
The hash value consists of all the key components and the locator. So, for a nonclustered index over a heap where columns c1 and c2 were indexed, the hash would contain contributions from c1, c2, and the RID. A key range lock applied to a particular key means that all keys between the value locked and the next value are locked for all data modification.

Key range locks can lock a slightly larger range than that implied by the WHERE clause. Suppose the following SELECT was executed in a transaction with isolation level SERIALIZABLE:

SELECT * FROM members WHERE first_name BETWEEN 'Al' AND 'Carl'

If 'Al', 'Bob', and 'Dave' are index keys in the table, the first two of these would acquire key range locks. Although this would prevent anyone from inserting either 'Alex' or 'Ben', it would also prevent someone from inserting 'Dan', which is not within the range of the WHERE clause.

Prior to SQL Server 7.0, page locking was used to prevent phantoms by locking the entire set of pages on which the phantom would exist. This can be too conservative. Key range locking lets SQL Server lock only a much more restrictive area of the table.

Impact

Key-range locking ensures that these scenarios are SERIALIZABLE:

- Range scan query
- Singleton fetch of a nonexistent row
- Delete operation
- Insert operation

However, the following conditions must be satisfied before key-range locking can occur:

- The transaction isolation level must be set to SERIALIZABLE.
- The operation performed on the data must use an index range access. Range locking is activated only when query processing (such as the optimizer) chooses an index path to access the data.

Key Range Lock Mode

Again, the req_mode column of master.dbo.syslockinfo has a lock mode code that is one less than the code values shown here.

Dynamic Locking

When modifying individual rows, SQL Server typically takes row locks to maximize concurrency (for example, OLTP, order-entry applications). When scanning larger volumes of data, it is more appropriate to take page or table locks to minimize the cost of acquiring locks (for example, DSS, data warehouse, reporting).

Locking Decision

The decision about which unit to lock is made dynamically, taking many factors into account, including other activity on the system. For example, if there are multiple transactions currently accessing a table, SQL Server will tend to favor row locking more so than it otherwise would. It may mean the difference between scanning the table now and paying a bit more in locking cost, or having to wait to acquire a coarser lock. A preliminary locking decision is made during query optimization, but that decision can be adjusted when the query is actually executed.

Lock Escalation

When the lock count for the transaction exceeds and is a multiple of ESCALATION_THRESHOLD (1250), the Lock Manager attempts to escalate. For example, when a transaction has acquired 1250 locks, the Lock Manager will try to escalate. The number of locks held may continue to increase after the escalation attempt (for example, because new tables are accessed, or the previous lock escalation attempts failed due to incompatible locks held by another spid). If the lock count for this transaction reaches 2500 (1250 * 2), the Lock Manager will attempt escalation again.

The Lock Manager looks at the lock memory it is using and, if it is more than 40 percent of SQL Server's allocated buffer pool memory, it tries to find a scan (SDES) where no escalation has already been performed.
It then repeats the search operation until all scans have been escalated or until the memory used drops under the MEMORY_LOAD_ESCALATION_THRESHOLD (40%) value. If lock escalation is not possible or fails to significantly reduce the lock memory footprint, SQL Server can continue to acquire locks until the total lock memory reaches 60 percent of the buffer pool (MAX_LOCK_RESOURCE_MEMORY_PERCENTAGE = 60). Lock escalation may also be done when a single scan (SDES) holds more than LOCK_ESCALATION_THRESHOLD (765) locks. There is no lock escalation on temporary tables or system tables.

Trace flag 1211 disables lock escalation.

Important: Do not relay this to the customer without careful consideration. Lock escalation is a necessary feature, not something to be avoided completely. Trace flags are global, and disabling lock escalation could lead to out-of-memory situations, extremely poorly performing queries, or other problems.

Lock escalation tracing can be seen using the Profiler or with the general locking trace flag, -T1200. However, trace flag 1200 shows all lock activity, so it should not be used on a production system. For more information, see also SOX000925700237 "TITLE: SQL 7.0 Lock escalation in SQL 7.0".

Lock Timeout

Application Lock Timeout

An application can set a lock timeout for a session with the SET option:

SET LOCK_TIMEOUT N

where N is a number of milliseconds. A value of -1 means that there will be no timeout, which is equivalent to the version 6.5 behavior. A value of 0 means that there will be no waiting; if a process finds a resource locked, it will generate error message 1222 and continue with the next statement. The current value of LOCK_TIMEOUT is stored in the global variable @@lock_timeout.

Note: After a lock timeout, any transaction containing the statement is rolled back or canceled by SQL Server 2000 (bug #352640 was filed). This behavior is different from that of SQL Server 7.0. With SQL Server 7.0, the application must have an error handler that can trap error 1222; if an application does not trap the error, it can proceed unaware that an individual statement within a transaction has been canceled, and errors can occur because statements later in the transaction may depend on the statement that was never executed. Bug #352640 is fixed in hotfix build 8.00.266, whereby a lock timeout will only cancel the individual statement rather than the enclosing transaction.

Internal Lock Timeout

At times, internal operations within SQL Server will attempt to acquire locks via the lock manager. Typically, these lock requests are issued with "no waiting." For example, the ghost record processing might try to clean up rows on a particular page, and before it can do that, it needs to lock the page. Thus, the ghost record manager will request a page lock with no wait so that if it cannot lock the page, it will just move on to other pages; it can always come back to this page later. If you look at SQL Profiler Lock:Timeout events, internal lock timeouts typically have a duration value of zero.

Lock Duration

Lock Mode and Transaction Isolation Level

For the REPEATABLE READ transaction isolation level, update locks are held until data is read and processed, unless promoted to exclusive locks. "Data is processed" means that we have decided whether the row in question matched the search criteria; if not, the update lock is released; otherwise, we get an exclusive lock and make the modification.
Consider the following query:

use northwind
go
dbcc traceon(3604, 1200, 1211) -- turn on lock tracing and disable escalation
go
set transaction isolation level repeatable read
begin tran
update dbo.[order details] set discount = convert (real, discount) where discount = 0.0
exec sp_lock

Update locks are promoted to exclusive locks when there is a match; otherwise, the update lock is released. The sp_lock output verifies that the SPID does not hold any update locks or shared locks at the end of the query. Lock escalation is turned off so that an exclusive table lock is not held at the end.

Warning: Do not use trace flag 1200 in a production environment because it produces a lot of output and slows down the server. Trace flag 1211 should not be used unless you have done extensive study to make sure it helps with performance. These trace flags are used here for illustration and learning purposes only.

Lock Ownership

Most of the locking discussion in this lesson relates to locks owned by "transactions." In addition to transactions, cursors and sessions can be owners of locks, and both affect how long locks are held. When the SCROLL_LOCKS option is used, for every row that is fetched, regardless of the state of a transaction, a cursor lock is held until the next row is fetched or the cursor is closed. Locks owned by a session are outside the scope of a transaction. The duration of these locks is bounded by the connection, and the process will continue to hold these locks until it disconnects. A typical lock owned by a session is the database (DB) lock.

Locking – Read Committed Scan

Under the read committed isolation level, when database pages are scanned, shared locks are held while each page is read and processed. The shared locks are released "behind" the scan and allow other transactions to update rows. It is important to note that the shared lock currently acquired will not be released until the shared lock for the next page is successfully acquired (this is commonly known as "crabbing"). If the same pages are scanned again, rows may have been modified or deleted by other transactions.

Locking – Repeatable Read Scan

Under the repeatable read isolation level, when database pages are scanned, shared locks are held while each page is read and processed. SQL Server continues to hold these shared locks, thus preventing other transactions from updating rows. If the same pages are scanned again, previously scanned rows will not have changed, but new rows may have been added by other transactions.

Locking – Serializable Read Scan

Under the serializable read isolation level, when database pages are scanned, shared locks are held not only on rows but also on the scanned key range. SQL Server continues to hold these shared locks until the end of the transaction. Because key range locks are held, not only are other transactions prevented from modifying the rows, but no new rows can be inserted either.

Prefetch and Isolation Level

Prefetch and Locking Behavior

The prefetch feature is available for use with SQL Server 7.0 and SQL Server 2000. When searching for data using a nonclustered index, the index is searched for a particular value. When that value is found, the index points to the disk address. The traditional approach would be to immediately issue an I/O for that row, given the disk address. The result is one synchronous I/O per row and, at most, one disk at a time working to evaluate the query. This does not take advantage of striped disk sets. The prefetch feature takes a different approach.
It continues looking for more record pointers in the nonclustered index. When it has collected a number of them, it provides the storage engine with prefetch hints. These hints tell the storage engine that the query processor will need these particular records soon. The storage engine can now issue several I/Os simultaneously, taking advantage of striped disk sets to execute multiple operations in parallel. For example, if the engine is scanning a nonclustered index to determine which rows qualify but will eventually need to visit the data page as well to access columns that are not in the index, it may decide to submit asynchronous page read requests for a group of qualifying rows. The prefetched data pages are then revisited later, to avoid waiting for each individual page read to complete in a serial fashion.

This data access path requires that a lock be held between the prefetch request and the row lookup to stabilize the row on the page, so that it is not moved by a page split or clustered key update. For our example, the isolation level of the query is escalated to REPEATABLE READ, overriding the transaction isolation level. With SQL Server 7.0 and SQL Server 2000, portions of a transaction can execute at a different transaction isolation level than the transaction itself. This is implemented as lock classes. Lock classes are used to control lock lifetime when portions of a transaction need to execute at a stricter isolation level than the underlying transaction. Unfortunately, in SQL Server 7.0 and SQL Server 2000, the lock class is created at the topmost operator of the query and hence released only at the end of the query. Currently there is no support to release the lock (lock class) after the row has been discarded or fetched by the filter or join operator. This is because the isolation level can be set at the query level via a lock class, but no lower.

Because of this, locks acquired during the query will not be released until the query completes. If prefetch is occurring, you may see a single SPID that holds hundreds of shared KEY or PAG locks even though the connection's isolation level is READ COMMITTED. The isolation level can be determined from DBCC PSS output. For details about this behavior, see "SOX001109700040 INF: Queries with PREFETCH in the plan hold lock until the end of transaction".

Other Locking Mechanisms

The lock manager does not manage latches and spinlocks.

Latches

Latches are internal mechanisms used to protect pages while doing operations such as placing a row physically on a page, compressing space on a page, or retrieving rows from a page. Latches can roughly be divided into I/O latches and non-I/O latches. If you see a high number of non-I/O related latches, SQL Server is usually doing a large number of hash or sort operations in tempdb. You can monitor latch activity via the DBCC SQLPERF('WAITSTATS') command.

Spinlocks

A spinlock is an internal data structure that is used to protect vital information that is shared within SQL Server. On a multiprocessor machine, when SQL Server tries to access a particular resource protected by a spinlock, it must first acquire the spinlock. If it fails, it executes a loop that checks to see if the lock is available and, if not, decrements a counter. If the counter reaches zero, it yields the processor to another thread and goes into a "sleep" (wait) state for a predetermined amount of time. When it wakes, hopefully, the lock is free and available.
If not, the loop starts again, and it is terminated only when the lock is acquired. The reason for implementing a spinlock is that it is probably less costly to "spin" for a short time than to yield the processor. Yielding the processor forces an expensive context switch where:

- The old thread's state must be saved
- The new thread's state must be reloaded
- The data stored in the L1 and L2 caches becomes useless to the processor

On a single-processor computer, the loop is not useful because no other thread can be running and thus no one can release the spinlock for the currently executing thread to acquire. In this situation, the thread yields the processor immediately.

Lesson 2: Concepts – Batch and Transaction

This lesson outlines some of the common causes that contribute to the perception of a slow server.

What You Will Learn

After completing this lesson, you will be able to:

- Review batch processing and error checking.
- Review explicit, implicit, and autocommit transactions and the transaction nesting level.
- Discuss how commit and rollback of transactions done in stored procedures and triggers affect the transaction nesting level.
- Discuss the various transaction isolation levels and their impact on locking.
- Discuss the difference between aborting a statement, a transaction, and a batch.
- Describe how @@ERROR, @@TRANCOUNT, and @@ROWCOUNT can be used for error checking and handling.

Recommended Reading

- Chapter 12, "Transactions and Triggers", Inside SQL Server 2000 by Kalen Delaney

Batch Definition

SQL Profiler Statements and Batches

To further your understanding of what a batch is and what a statement is, you can use SQL Profiler to study the definition of batch and statement.

Try This: Using SQL Profiler to Analyze a Batch

1. Log on to a server with Query Analyzer.
2. Start SQL Profiler against the same server.
3. Start a trace using the "StandardSQLProfiler" template.
4. Execute the following using Query Analyzer:

   SELECT @@VERSION
   SELECT @@SPID

   The 'SQL:BatchCompleted' event is captured by the trace. It shows both statements as a single batch.

5. Now execute the following using Query Analyzer:

   {call sp_who()}

   What shows up? The 'RPC:Completed' event with the sp_who information. RPC is simply another entry point to SQL Server for calling stored procedures with native data types; this allows one to avoid parsing. The 'RPC:Completed' event should be considered the same as a batch for the purposes of this discussion.

Stop the current trace and start a new trace using the "SQLProfilerTSQL_SPs" template. Issue the same command as outlined in step 5 above. Looking at the output, you can see not only the batch markers but also each statement as executed within the batch.

Autocommit, Explicit, and Implicit Transactions

Autocommit Transaction Mode (Default)

Autocommit mode is the default transaction management mode of SQL Server. Every Transact-SQL statement, whether it is a standalone statement or part of a batch, is committed or rolled back when it completes. If a statement completes successfully, it is committed; if it encounters any error, it is rolled back. A SQL Server connection operates in autocommit mode whenever this default mode has not been overridden by either explicit or implicit transactions. Autocommit mode is also the default mode for ADO, OLE DB, ODBC, and DB-Library. A SQL Server connection operates in autocommit mode until a BEGIN TRANSACTION statement starts an explicit transaction, or implicit transaction mode is set on.
When the explicit transaction is committed or rolled back, or when implicit transaction mode is turned off, SQL Server returns to autocommit mode.

Explicit Transaction Mode

An explicit transaction is a transaction that starts with a BEGIN TRANSACTION statement. An explicit transaction can contain one or more statements and must be terminated by either a COMMIT TRANSACTION or a ROLLBACK TRANSACTION statement.

Implicit Transaction Mode

SQL Server can automatically or, more precisely, implicitly start a transaction for you if a SET IMPLICIT_TRANSACTIONS ON statement is run or if the implicit transaction option is turned on globally by running sp_configure 'user options' 2. (Actually, the bit mask 0x2 must be turned on for the user option, so you might have to perform an OR operation with the existing user option value.) See SQL Server 2000 Books Online on how to turn on implicit transactions under ODBC and OLE DB (acdata.chm::/ac_8_md_06_2g6r.htm).

Transaction Nesting

Explicit transactions can be nested. Committing inner transactions is ignored by SQL Server other than to decrement @@TRANCOUNT. The transaction is either committed or rolled back based on the action taken at the end of the outermost transaction. If the outer transaction is committed, the inner nested transactions are also committed. If the outer transaction is rolled back, then all inner transactions are also rolled back, regardless of whether the inner transactions were individually committed.

Each call to COMMIT TRANSACTION applies to the last executed BEGIN TRANSACTION. If the BEGIN TRANSACTION statements are nested, then a COMMIT statement applies only to the last nested transaction, which is the innermost transaction. Even if a COMMIT TRANSACTION transaction_name statement within a nested transaction refers to the transaction name of the outer transaction, the commit applies only to the innermost transaction.

If a ROLLBACK TRANSACTION statement without a transaction_name parameter is executed at any level of a set of nested transactions, it rolls back all the nested transactions, including the outermost transaction.

The @@TRANCOUNT function records the current transaction nesting level. Each BEGIN TRANSACTION statement increments @@TRANCOUNT by one. Each COMMIT TRANSACTION statement decrements @@TRANCOUNT by one. A ROLLBACK TRANSACTION statement that does not have a transaction name rolls back all nested transactions and decrements @@TRANCOUNT to 0. A ROLLBACK TRANSACTION that uses the transaction name of the outermost transaction in a set of nested transactions rolls back all the nested transactions and decrements @@TRANCOUNT to 0. When you are unsure whether you are already in a transaction, SELECT @@TRANCOUNT to determine whether it is 1 or more. If @@TRANCOUNT is 0, you are not in a transaction. You can also find the transaction nesting level by checking the sysprocesses.open_tran column. See the SQL Server 2000 Books Online topic "Nesting Transactions" (acdata.chm::/ac_8_md_06_66nq.htm) for more information.

Statement, Transaction, and Batch Abort

One batch can have many statements, and one transaction can also have multiple statements. One transaction can span multiple batches, and one batch can have multiple transactions.

Statement Abort

The currently executing statement is aborted. This can be a bit confusing when you start talking about statements in a trigger or stored procedure.
Let us look closely at the following trigger:

CREATE TRIGGER TRG8134 ON TBL8134
AFTER INSERT
AS
BEGIN
    SELECT 1/0
    SELECT 'Next command in trigger'
END

To fire the INSERT trigger, the batch could be as simple as 'INSERT INTO TBL8134 VALUES(1)'. However, the trigger contains two statements that must be executed as part of the batch to satisfy the client's insert request. When the 'SELECT 1/0' causes the divide-by-zero error, a statement abort is issued for the 'SELECT 1/0' statement.

Batch and Transaction Abort

On SQL Server 2000 (and SQL Server 7.0), whenever a non-informational error is encountered in a trigger, the statement abort is promoted to a batch and transaction abort. Thus, in the example, the statement abort for 'SELECT 1/0' is promoted to an entire batch abort. No further statements in the trigger or batch will be executed, and a rollback is issued. On SQL Server 6.5, the statement aborts immediately and results in a transaction abort; however, the rest of the statements within the trigger are executed. This trigger could return 'Next command in trigger' as a result set. Once the trigger completes, the batch abort promotion takes effect.

Conversely, submitting a similar set of statements in a standalone batch can result in different behavior:

SELECT 1/0
SELECT 'Next command in batch'

Not considering the set option possibilities, a divide-by-zero error generally results in a statement abort. Since it is not in a trigger, the promotion to a batch abort is avoided and the subsequent SELECT statement can execute. The programmer should add an "IF @@ERROR" check immediately after the 'SELECT 1/0' to control the flow of T-SQL execution correctly.

Aborting and Set Options

ARITHABORT

If SET ARITHABORT is ON, these error conditions cause the query or batch to terminate. If the errors occur in a transaction, the transaction is rolled back. If SET ARITHABORT is OFF and one of these errors occurs, a warning message is displayed, and NULL is assigned to the result of the arithmetic operation. When an INSERT, DELETE, or UPDATE statement encounters an arithmetic error (overflow, divide-by-zero, or a domain error) during expression evaluation when SET ARITHABORT is OFF, SQL Server inserts or updates a NULL value. If the target column is not nullable, the insert or update action fails and the user receives an error.

XACT_ABORT

When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back. When OFF, only the Transact-SQL statement that raised the error is rolled back and the transaction continues processing. Compile errors, such as syntax errors, are not affected by SET XACT_ABORT. For example:

CREATE TABLE t1 (a int PRIMARY KEY)
CREATE TABLE t2 (a int REFERENCES t1(a))
GO
INSERT INTO t1 VALUES (1)
INSERT INTO t1 VALUES (3)
INSERT INTO t1 VALUES (4)
INSERT INTO t1 VALUES (6)
GO
SET XACT_ABORT OFF
GO
BEGIN TRAN
INSERT INTO t2 VALUES (1)
INSERT INTO t2 VALUES (2) /* Foreign key error */
INSERT INTO t2 VALUES (3)
COMMIT TRAN
SELECT 'Continue running batch 1...'
GO
SET XACT_ABORT ON
GO
BEGIN TRAN
INSERT INTO t2 VALUES (4)
INSERT INTO t2 VALUES (5) /* Foreign key error */
INSERT INTO t2 VALUES (6)
COMMIT TRAN
SELECT 'Continue running batch 2...'
GO
/* The SELECT below shows only keys 1 and 3 added.
   The key 2 insert failed and was rolled back, but XACT_ABORT
   was OFF and the rest of the first transaction succeeded.
   The key 5 insert error with XACT_ABORT ON caused all of the
   second transaction to roll back. Also note that 'Continue running batch 2...'
Aborting and Set Options

ARITHABORT

If SET ARITHABORT is ON, arithmetic error conditions (overflow, divide-by-zero, or a domain error) cause the query or batch to terminate. If the errors occur in a transaction, the transaction is rolled back. If SET ARITHABORT is OFF and one of these errors occurs, a warning message is displayed and NULL is assigned to the result of the arithmetic operation.

When an INSERT, DELETE, or UPDATE statement encounters an arithmetic error during expression evaluation while SET ARITHABORT is OFF, SQL Server inserts or updates a NULL value. If the target column is not nullable, the insert or update action fails and the user receives an error.

XACT_ABORT

When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back. When it is OFF, only the Transact-SQL statement that raised the error is rolled back and the transaction continues processing. Compile errors, such as syntax errors, are not affected by SET XACT_ABORT. For example:

CREATE TABLE t1 (a int PRIMARY KEY)
CREATE TABLE t2 (a int REFERENCES t1(a))
GO
INSERT INTO t1 VALUES (1)
INSERT INTO t1 VALUES (3)
INSERT INTO t1 VALUES (4)
INSERT INTO t1 VALUES (6)
GO
SET XACT_ABORT OFF
GO
BEGIN TRAN
INSERT INTO t2 VALUES (1)
INSERT INTO t2 VALUES (2) /* Foreign key error */
INSERT INTO t2 VALUES (3)
COMMIT TRAN
SELECT 'Continue running batch 1...'
GO
SET XACT_ABORT ON
GO
BEGIN TRAN
INSERT INTO t2 VALUES (4)
INSERT INTO t2 VALUES (5) /* Foreign key error */
INSERT INTO t2 VALUES (6)
COMMIT TRAN
SELECT 'Continue running batch 2...'
GO
/* The SELECT shows only keys 1 and 3 added. The key 2 insert failed and was
   rolled back, but XACT_ABORT was OFF and the rest of the first transaction
   succeeded. The key 5 insert error with XACT_ABORT ON caused all of the
   second transaction to roll back. Also note that 'Continue running batch 2...'
   is not returned, indicating that the batch was aborted. */
SELECT * FROM t2
GO
DROP TABLE t2
DROP TABLE t1
GO

Compile and Run-time Errors

Compile Errors

Compile errors are encountered during syntax checks, security checks, and other general operations that prepare the batch for execution. These errors prevent the optimization of the query and lead to an immediate abort: the statement is not run and the batch is aborted. The transaction state is generally left untouched. For example, assume there are four statements in a particular batch. If the third statement has a syntax error, none of the statements in the batch is executed.

Optimization Errors

Optimization errors include rare situations where the statement encounters a problem when attempting to build an optimal execution plan. For example, a "too many tables referenced in the query" error can be reported because a work table was added to the plan.

Runtime Errors

Runtime errors are those encountered during the execution of the query. Consider the following batch:

SELECT * FROM pubs.dbo.titles
UPDATE pubs.dbo.authors SET au_lname = au_lname
SELECT * FROM foo
UPDATE pubs.dbo.authors SET au_lname = au_lname

If you run the above statements in a batch, the first two statements will be executed, the third statement will fail because table foo does not exist, and the batch will terminate. Deferred Name Resolution is the feature that allows this batch to start executing before resolving the object foo. It allows SQL Server to delay object resolution and place a placeholder in the query's execution plan; the object referenced by the placeholder is not resolved until the query is executed. In our example, the execution of the statement SELECT * FROM foo triggers another compile process to resolve the name again. This time, error message 208 is returned:

Error: 208, Level 16, State 1, Line 1
Invalid object name 'foo'.

Message 208 can be encountered as a runtime or a compile error depending on whether the Deferred Name Resolution feature is available: on SQL Server 6.5 it is a compile error, and on SQL Server 2000 (and SQL Server 7.0) it is a runtime error due to Deferred Name Resolution.

In the following example, a trigger references authors2, and the error is detected as SQL Server attempts to execute the trigger. (Under SQL Server 6.5, the CREATE TRIGGER statement itself fails because authors2 does not exist at compile time.) When errors are encountered in a trigger, generally the statement, batch, and transaction are aborted. You should be able to observe this by running the following script in the pubs database:

create table tblTest(iID int)
go
create trigger trgInsert on tblTest for INSERT
as
begin
    select * from authors
    select * from authors2
    select * from titles
end
go
begin tran
select 'Before'
insert into tblTest values(1)
select 'After'
go
select @@TRANCOUNT
go

When the failing statement is run directly in a batch instead, the statement and the batch are aborted but the transaction remains active. The following script illustrates this:

begin tran
select 'Before'
select * from authors2
select 'After'
go
select @@TRANCOUNT
go

One other factor in a compile versus runtime error is implicit data type conversion. Consider running the following statements on SQL Server 6.5 and on SQL Server 2000 (and SQL Server 7.0):

create table tblData(dtData datetime)
go
select 1
insert into tblData values(12/13/99)
go

On SQL Server 6.5, you get an error before execution of the batch begins, so no statements are executed and the batch is aborted.
Error: 206, Level 16, State 2, Line 2
Operand type clash: int is incompatible with datetime

On SQL Server 2000, you get the default value (1900-01-01 00:00:00.000) inserted into the table: implicit data type conversion treats 12/13/99 as integer division, which evaluates to 0, so the default date and time value is inserted and no error is returned. The fix on either version is to wrap the date string in quotes. See Bug #56118 (sqlbug_70) for more details about this situation.

Another example of a runtime error is a 605 message:

Error: 605
Attempt to fetch logical page %S_PGID in database '%.*ls' belongs to object '%.*ls', not to object '%.*ls'.

A 605 error is always a runtime error. However, its handling can vary depending on the transaction isolation level established by the SPID (for example, when the NOLOCK lock hint is used). Specifically, a 605 error is considered an ACCESS error; errors associated with buffer and page access are found in the 600 series of errors. When the error is encountered, the isolation level of the SPID is examined to determine the proper handling, based on whether the error is informational or fatal.

Transaction Error Checking

Not all errors cause transactions to roll back automatically. Although it is difficult to determine exactly which errors will roll back transactions and which will not, the main point is that programmers must perform error checking and handle errors appropriately.

Error Handling

Raiserror Details

RAISERROR seems to be a source of confusion but is really rather simple. RAISERROR with a severity level of 20 or higher terminates the connection. When the connection is terminated, a full rollback of any open transaction is immediately initiated by SQL Server (except for a distributed transaction with DTC involved). Severity levels lower than 20 simply result in the error message being returned to the client; they do not affect the transaction scope of the connection. Consider the following batch:

use pubs
begin tran
update authors set au_lname = 'smith'
raiserror ('This is bad', 19, 1) with log
select @@trancount

With the severity set at 19, the select @@trancount statement is executed after the raiserror statement and returns a value of 1. If the severity is changed to 20, the SELECT statement does not run and the connection is broken.

Important: Error handling must occur not only in T-SQL batches and stored procedures, but also in application program code.

Transactions and Triggers (1 of 2)

Basic behavior assumes the implicit transactions setting is OFF. This behavior makes it possible to identify business logic errors in a trigger, raise an error, roll back the action, and add an audit table entry. Logically, the insert into the audit table cannot take place before the ROLLBACK action, and you would not want to build the audit table insert into every application error handler that violated the business rule of the trigger.

For more information, see also the SQL Server 2000 Books Online topic "Rollbacks in stored procedure and triggers" (acdata.chm::/ac_8_md_06_4qcz.htm).

IMPLICIT_TRANSACTIONS ON Behavior

The behavior of firing other triggers on the same table can be tricky. Say you added a trigger that checks the CODE field: read-only versions of the rows contain the code 'RO' and read/write versions use 'RW'. Whenever someone tries to delete a row with code 'RO', the trigger issues the rollback and logs an audit table entry. A sketch of such a trigger follows.
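A minimal sketch, assuming hypothetical tblDocs (with a code column) and tblAudit tables; the names, columns, and message text are illustrative only:

CREATE TRIGGER trgDocsDelete ON tblDocs FOR DELETE
AS
BEGIN
    IF EXISTS (SELECT * FROM deleted WHERE code = 'RO')
    BEGIN
        ROLLBACK TRAN  -- undo the delete; the statements below run after the rollback
        INSERT INTO tblAudit VALUES ('RO delete attempt blocked', GETDATE())
        RAISERROR('Read-only rows cannot be deleted.', 16, 1)
    END
END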
However, you also have a second trigger that is responsible for cascading delete operations. One client could issue the delete without implicit transactions on: only the first trigger would execute, and it would then terminate the batch. A second client with implicit transactions on could issue the same delete, and the secondary trigger would also fire. You end up with a situation in which the cascading delete operations take place (are committed) but the initial row remains in the table because of the rollback operation. None of the delete operations should have been allowed, but because the transaction scope was restarted as a result of the implicit transactions setting, they were.

Transactions and Triggers (2 of 2)

It is extremely difficult to determine the execution state of a trigger when using explicit rollback statements in combination with implicit transactions. The RETURN statement is not allowed to return a value. The only way I have found to set @@ERROR is to use a RAISERROR as the last statement executed in the last trigger to execute. If you modify the example, the following RAISERROR statement will set @@ERROR to 50000:

CREATE TRIGGER trgTest on tblTest for INSERT
AS
BEGIN
    ROLLBACK
    INSERT INTO tblAudit VALUES (1)
    RAISERROR('This is bad', 14, 1)
END

However, this value does not carry over to a secondary trigger for the same table. If you raise an error at the end of the first trigger and then look at @@ERROR in the secondary trigger, @@ERROR remains 0.

Carrying Forward an Active/Open Transaction

It is possible to exit from a trigger and carry forward an open transaction by issuing a BEGIN TRAN, or by setting implicit transactions on and doing an INSERT, UPDATE, or DELETE.

Warning: It is never recommended that a trigger call BEGIN TRANSACTION. Doing so increments the transaction count, and flawed code logic (failing to call COMMIT TRANSACTION) can leave the transaction count elevated upon exit from the trigger.

Transaction Count

The behavior is better explained by understanding how the server works. Whether or not you are in a transaction, when a modification takes place the transaction count is incremented. So, in the simplest form, during the processing of an insert the transaction count is 1. On completion of the insert, the server issues a commit (and thus decrements the transaction count). If the commit finds that the transaction count has returned to 0, the actual commit processing is completed. Issuing a commit when the transaction count is greater than 1 simply decrements the nested transaction counter.

Thus, when we enter a trigger, the transaction count is 1. At the completion of the trigger, the transaction count will be 0 because of the commit issued at the end of the modification statement (the insert). In our example, if the connection was already in a transaction and called the second INSERT, then because implicit transactions is ON, the transaction count in the trigger will be 2 as long as the ROLLBACK is not executed. At the end of the insert, the commit is again issued, decrementing the transaction reference count to 1. However, the value does not return to 0, so the transaction remains open/active.

Subsequent triggers are fired only if the transaction count at the end of the trigger remains greater than or equal to 1. The key to the continuation of secondary triggers and the batch is the transaction count at the end of a trigger execution.
If the trigger that performs a rollback has done an explicit BEGIN TRANSACTION or uses implicit transactions, subsequent triggers and the batch will continue. If the transaction count is not 1 or greater, subsequent triggers and the batch will not execute.

Warning: Forcing the transaction count after issuing a rollback is dangerous because you can easily lose track of your transaction nesting level. When performing an explicit rollback in a trigger, you should immediately issue a RETURN statement to maintain consistent behavior between connections with and without the implicit transaction setting. This forces the trigger(s) and batch to terminate immediately.

One method of dealing with this issue is to run SET IMPLICIT_TRANSACTIONS OFF as the first statement of any trigger. Other methods entail checking @@TRANCOUNT at the end of the trigger and continuing to COMMIT the transaction as long as @@TRANCOUNT is greater than 1.

Examples

The following examples are based on this table:

create table tbl50000Insert (iID int NOT NULL)
go

Note: If more than one trigger is used, the sp_settriggerorder command should be used to guarantee the trigger firing sequence. This command is omitted in these examples to keep the statements simple.

First Example

In the first example, the second trigger is never fired and the batch, starting with the insert statement, is aborted. Thus, the print statement is never issued.

print('Trigger issues rollback - cancels batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
    select 'Inserted', * from inserted
    rollback tran
    select 'End of trigger', @@TRANCOUNT as 'TRANCOUNT'
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT
as
begin
    select 'In Trigger2'
    select 'Trigger 2 Inserted', * from inserted
end
go
insert into tbl50000Insert values(1)
print('---------------------- In same batch')
select * from tbl50000Insert
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
go
delete from tbl50000Insert

Second Example

The next example shows that because a new transaction is started in the first trigger, the second trigger is fired and the print statement in the batch is executed. Note that the insert is still rolled back.

print('Trigger issues rollback - increases tran count to continue batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
    select 'Inserted', * from inserted
    rollback tran
    begin tran
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT
as
begin
    select 'In Trigger2'
    select 'Trigger 2 Inserted', * from inserted
end
go
insert into tbl50000Insert values(2)
print('---------------------- In same batch')
select * from tbl50000Insert
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
go
delete from tbl50000Insert

Third Example

In the third example, the raiserror statement is used to set the @@ERROR value, and the BEGIN TRAN statement is used in the trigger to allow the batch to continue to run.
print('Trigger issues rollback - uses raiserror to set @@ERROR')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
    select 'Inserted', * from inserted
    rollback tran
    begin tran -- Increase @@trancount to allow batch to continue
    select @@trancount as 'Trancount'
    raiserror('This is from the trigger', 14, 1)
end
go
insert into tbl50000Insert values(3)
select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
go
-- Cleanup
drop trigger trg50000Insert
go
delete from tbl50000Insert

Fourth Example

In the fourth example, a second trigger is added to illustrate that the @@ERROR value set in the first trigger is not seen in the second trigger, nor does it show up in the batch after the second trigger is fired.

print('Trigger issues rollback - uses raiserror to set @@ERROR, not seen in second trigger and cleared in batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
    select 'Inserted', * from inserted
    rollback
    begin tran -- Increase @@trancount to allow batch to continue
    select @@TRANCOUNT as 'Trancount'
    raiserror('This is from the trigger', 14, 1)
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT
as
begin
    select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
end
go
insert into tbl50000Insert values(4)
select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
go
delete from tbl50000Insert

Lesson 3: Concepts – Locks and Applications

This lesson outlines some of the common causes that contribute to the perception of a slow server.

What You Will Learn

After completing this lesson, you will be able to:
 Explain how lock hints are used and their impact.
 Discuss the effect on locking when an application uses Microsoft Transaction Server.
 Identify the different kinds of deadlocks, including distributed deadlock.

Recommended Reading
 Chapter 14, "Locking", Inside SQL Server 2000 by Kalen Delaney
 Chapter 16, "Query Tuning", Inside SQL Server 2000 by Kalen Delaney
 Q239753 – Deadlock Situation Not Detected by SQL Server
 Q288752 – Blocked SPID Not Participating in Deadlock May Incorrectly be Chosen as victim

Locking Hints

UPDLOCK

If update locks are used instead of shared locks while reading a table, the locks are held until the end of the statement or transaction. UPDLOCK has the advantage of allowing you to read data (without blocking other readers) and update it later with the assurance that the data has not changed since you last read it.

READPAST

READPAST is an optimizer hint for use with SELECT statements. When this hint is used, SQL Server will read past locked rows. For example, assume table T1 contains a single integer column with the values 1, 2, 3, 4, and 5. If transaction A changes the value of 3 to 8 but has not yet committed, a SELECT * FROM T1 (READPAST) yields the values 1, 2, 4, 5.

Tip: READPAST only applies to transactions operating at READ COMMITTED isolation and only reads past row-level locks.

This lock hint can be used to implement a work queue on a SQL Server table. For example, assume there are many external work requests being inserted into a table and they should be serviced in approximate insertion order, but they do not have to be strictly FIFO. If you have four worker threads consuming work items from the queue, each can pick up a record using READPAST locking, then delete the entry from the queue and commit when it is done. If a worker fails, it can roll back, leaving the entry on the queue for the next worker thread to pick up. A sketch of this pattern follows.
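A minimal sketch of the pattern, assuming a hypothetical queue table tblWorkQueue(iID int); the UPDLOCK/READPAST combination lets each worker claim an unlocked row while skipping rows claimed by other workers:

BEGIN TRANSACTION
DECLARE @id int
SELECT TOP 1 @id = iID
FROM tblWorkQueue (UPDLOCK, READPAST)  -- claim one row; skip rows other workers hold
ORDER BY iID                           -- approximate insertion order
IF @id IS NOT NULL
BEGIN
    -- ... service work item @id here ...
    DELETE FROM tblWorkQueue WHERE iID = @id
    COMMIT TRANSACTION                 -- releases the lock; the item is gone
END
ELSE
    ROLLBACK TRANSACTION               -- queue empty, or all items currently locked

On failure, the worker would issue ROLLBACK instead of COMMIT, leaving the row unlocked for the next worker.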
Caution: The READPAST hint is not compatible with HOLDLOCK.

 Try This: Using Locking Hints
1. Open a Query Window and connect to the pubs database.
2. Execute the following statements (the -- Conn comments are optional, to help you keep track of each connection):

BEGIN TRANSACTION -- Conn 1
UPDATE titles SET price = price * 0.9 WHERE title_id = 'BU1032'

3. Open a second connection and execute the following statements:

SELECT @@lock_timeout -- Conn 2
GO
SELECT * FROM titles
SELECT * FROM authors

4. Open a third connection and execute the following statements:

SET LOCK_TIMEOUT 0 -- Conn 3
SELECT * FROM titles
SELECT * FROM authors

5. Open a fourth connection and execute the following statement:

SELECT * FROM titles (READPAST) -- Conn 4
WHERE title_ID < 'C'
SELECT * FROM authors

How many records were returned? 3

6. Open a fifth connection and execute the following statement:

SELECT * FROM titles (NOLOCK) -- Conn 5
WHERE title_ID < 'C'

Deadlock Detection

The lock manager normally checks for deadlocks every 5 seconds. For the 20 seconds following a detected deadlock, the lock manager also checks for deadlocks every time a SPID gets blocked. So a single deadlock triggers 20 seconds of more immediate deadlock detection, but if no additional deadlocks occur in those 20 seconds, the lock manager no longer checks for deadlocks at each block, and detection again happens only every 5 seconds. Although normally not needed, you may use trace flag -T1205 to trace the deadlock detection process.

Note: Note the distinction between application locks and other locks with respect to deadlock detection. For an application lock, we do not roll back the transaction of the deadlock victim but simply return -3 to sp_getapplock, which the application needs to handle itself (see the sketch at the end of this deadlock discussion).

Deadlock Resolution

How is a deadlock resolved? SQL Server picks one of the connections as the deadlock victim. The victim is chosen either as the one whose transaction is least expensive to roll back (calculated using the number and size of the log records) or as the process in which SET DEADLOCK_PRIORITY LOW was specified. The victim's transaction is rolled back, its held locks are released, and SQL Server sends error 1205 to the victim's client application to notify it that it was chosen as the victim. The other process can then obtain access to the resource it was waiting on and continue.

Error 1205: Your transaction (process ID #%d) was deadlocked with another process and has been chosen as the deadlock victim. Rerun your transaction.

Symptoms of Deadlocking

Error 1205 usually is not written to the SQL Server errorlog. Unfortunately, you cannot use sp_altermessage to cause 1205 to be written to the errorlog. If the client application does not capture and display error 1205, some of the symptoms of deadlocks occurring are:
 Clients complain of mysteriously canceled queries when using certain features of an application.
 Deadlocks may be accompanied by excessive blocking; lock contention increases the chance that a deadlock will occur.

Triggers and Deadlock

Triggers promote the deadlock priority of the SPID for the life of the trigger execution when DEADLOCK_PRIORITY is not set to LOW. When a statement in a trigger causes a deadlock to occur, the SPID executing the trigger is given preferential treatment and will not become the victim.

Warning: Bug 235794 is filed against SQL Server 2000: a blocked SPID that is not a participant in a deadlock may incorrectly be chosen as the deadlock victim if the SPID is blocked by one of the deadlock participants and the SPID has the least amount of transaction logging. See KB article Q288752, "Blocked Spid Not Participating in Deadlock May Incorrectly be Chosen as victim", for more information.
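A minimal sketch of handling the -3 return mentioned in the application-lock note above; the resource name and timeout are illustrative only:

DECLARE @result int
BEGIN TRANSACTION
EXEC @result = sp_getapplock
    @Resource = 'MyAppResource',
    @LockMode = 'Exclusive',
    @LockOwner = 'Transaction',
    @LockTimeout = 10000
IF @result = -3
BEGIN
    -- Chosen as an application-lock deadlock victim: SQL Server does NOT
    -- roll back the transaction automatically, so the caller must do it.
    ROLLBACK TRANSACTION
END
ELSE IF @result >= 0  -- 0 = granted immediately, 1 = granted after waiting
BEGIN
    -- ... work protected by the application lock ...
    COMMIT TRANSACTION  -- transaction-owned application locks are released here
END
ELSE
    ROLLBACK TRANSACTION  -- -1 timeout, -2 canceled, or other error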
Distributed Deadlock – Scenario 1

The term distributed deadlock is ambiguous; there are many types of distributed deadlocks.

Scenario 1: A client application opens connection A, begins a transaction, acquires some locks, and then opens connection B. Connection B gets blocked by A, but the application is designed not to commit A's transaction until B completes.

Note: SQL Server has no way of knowing that connection A is somehow dependent on B; they are two distinct connections with two distinct transactions. This situation is discussed in scenario #4 in "Q224453 INF: Understanding and Resolving SQL Server 7.0 Blocking Problems".

Distributed Deadlock – Scenario 2

Scenario 2: A distributed deadlock involving bound connections. Two connections can be bound into a single transaction context with sp_getbindtoken/sp_bindsession or via DTC. Spid 60 enlists in a transaction with spid 61. A third spid, 62, is blocked by spid 60, but spid 61 is blocked by spid 62. Because they are doing work in the same transaction, spid 60 cannot commit until spid 61 finishes its work, but spid 61 is blocked by 62, which is blocked by 60. This scenario is described in the article "Q239753 – Deadlock Situation Not Detected by SQL Server."

Note: SQL Server 6.5 and 7.0 do not detect this deadlock. The SQL Server 2000 deadlock detection algorithm has been enhanced to detect this type of distributed deadlock.

The diagram in the slide illustrates this situation. Resources locked by a spid are shown below that spid (in a box). Arrows indicate blocking and are drawn from the blocked spid to the resource that the spid requires. A circle represents a transaction; spids in the same transaction are shown in the same circle.

Distributed Deadlock – Scenario 3

Scenario 3: A distributed deadlock involving linked servers or server-to-server RPC. Spid 60 on Server 1 executes a stored procedure on Server 2 via a linked server. This stored procedure does a loopback linked server query against a table on Server 1, and this connection is blocked by a lock held by spid 60.

Note: No version of SQL Server is currently designed to detect this distributed deadlock.

Lesson 4: Information Collection and Analysis

This lesson outlines some of the common causes that contribute to the perception of a slow server.

What You Will Learn

After completing this lesson, you will be able to:
 Identify specific information needed for troubleshooting issues.
 Locate and collect information needed for troubleshooting issues.
 Analyze output of the DBCC INPUTBUFFER, DBCC PSS, and DBCC PAGE commands.
 Review information collected from the master.dbo.sysprocesses table.
 Review information collected from the master.dbo.syslockinfo table.
 Review output of sp_who, sp_who2, and sp_lock.
 Analyze a Profiler log for query usage patterns.
 Review output of trace flags to help troubleshoot deadlocks.

Recommended Reading
 Q244455 – INF: Definition of Sysprocesses Waittype and Lastwaittype Fields
 Q244456 – INF: Description of DBCC PSS Command for SQL Server 7.0
 Q271509 – INF: How to Monitor SQL Server 2000 Blocking
 Q251004 – How to Monitor SQL Server 7.0 Blocking
 Q224453 – Understanding and Resolving SQL Server 7.0 Blocking Problems
 Q282749 – BUG: Deadlock information reported with SQL Server 2000 Profiler

Locking and Blocking

 Try This: Examine Blocked Processes
1. Open a Query Window and connect to the pubs database.
Execute the following statements:

BEGIN TRAN -- connection 1
UPDATE titles SET price = price + 1

2. Open another connection and execute the following statement:

SELECT * FROM titles -- connection 2

3. Open a third connection and execute sp_who; note the process id (spid) of the blocked process. (Connection 3)

4. In the same connection, execute the following:

SELECT spid, cmd, waittype FROM master..sysprocesses
WHERE waittype <> 0 -- connection 3

5. Do not close any of the connections! What was the wait type of the blocked process?

 Try This: Look at Locks Held
This assumes all your connections are still open from the previous exercise.
• Execute sp_lock -- Connection 3
What locks is the process from the previous example holding? Make sure you run ROLLBACK TRAN in Connection 1 to clean up your transaction.

Collecting Information

See Module 2 for more about how to gather this information using various tools.

Recognizing Blocking Problems

How to recognize blocking problems:
 Users complain about poor performance at a certain time of day, or after a certain number of users connect.
 SELECT * FROM sysprocesses or sp_who2 shows non-zero values in the blocked or BlkBy column.
 More severe blocking incidents will have long blocking chains or large sysprocesses.waittime values for blocked spids.
 Possibl
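A minimal monitoring sketch along the lines of the sysprocesses checks above, listing blocked spids and their blockers using columns available in SQL Server 2000 sysprocesses:

SELECT spid,
       blocked,       -- spid of the process blocking this one
       waittime,      -- milliseconds spent waiting so far
       lastwaittype,
       cmd
FROM master..sysprocesses
WHERE blocked <> 0
ORDER BY waittime DESC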
