Automating UI Testing

When you automate tests of UI interactions, you free critical staff and resources for other work. In this way you maximize productivity, minimize procedural errors, and shorten the amount of time needed to develop product updates.

You can use the Automation instrument to automate user interface tests in your iOS app through test scripts that you write. These scripts run outside of your app and simulate user interaction by calling the UI Automation API, a JavaScript programming interface that specifies actions to be performed in your app as it runs in the simulator or on a connected device. Your test scripts return log information to the host computer about the actions performed. You can even integrate the Automation instrument with other instruments to perform sophisticated tests such as tracking down memory leaks and isolating causes of performance problems.

This chapter describes how you use the Automation template in Instruments to execute scripts. The Automation trace template, which consists of the Automation instrument only, executes a script that simulates UI interaction for an iOS app launched from Instruments.

This chapter also explains how to integrate your scripts with the UI Automation programming interface in order to do the following:

  • Access your app’s UI element hierarchy

  • Add timing flexibility by using timeout periods

  • Log and verify the information returned to Instruments

  • Handle alerts properly

  • Handle changes in device orientation gracefully

  • Handle multitasking

The Automation instrument provides powerful features, including:

  • Script editing with a built-in script editor

  • Capturing (recording) user interface actions for use in automation scripts

  • Running a test script from an Xcode project

  • Powerful API features, including the ability to simulate a device location change and to execute a task from the Automation instrument on the host

As you work through this chapter, look for more detailed information about each class in UI Automation JavaScript Reference for iOS.

Note: The Automation instrument works only with apps that have been code signed with a development provisioning profile when they are built in Xcode. Apps signed with a distribution provisioning profile cannot be controlled with the UI Automation programming interface. However, because scripts run outside your app, the build you test can be functionally identical to the one you submit to the App Store; you simply rebuild it with the distribution profile before submission.

Important: Simulated actions may not prevent your test device from auto-locking. To ensure that auto-locking does not happen, before running tests on a device, you should set the Auto-Lock setting to Never.

iOS 8 Enhancement: iOS 8 includes a new Enable UI Automation preference under Settings > Developer, which allows third-party developers finer control of when their devices are available to perform automation. For physical iOS devices, this setting is off by default and must be enabled prior to performing any UI Automation. In the simulator, the setting is enabled by default.

Writing, Exporting, and Importing Automation Test Scripts

It’s easy to write your own scripts inside Instruments. The built-in script editor in the Automation instrument allows you to create and edit new test scripts in your trace document, as well as import existing ones.

  • To create a new script

After you create a script, you will want to use it throughout the development of your app. You do this by saving your configured trace document (which includes your script) and opening it again whenever you want to test your app. Or, you can export your test script and import it into a new trace document when you need it.

  • To export a script to a file on a disk

  • To import a previously saved script

Loading Saved Automation Test Scripts

You write your Automation tests in JavaScript, using the UI Automation JavaScript library to specify actions that should be performed in your app as it runs. You can create as many scripts as you like and include them in your trace document, but you can run only one script at a time. The API does, however, offer a #import directive that allows you to write smaller, reusable discrete test scripts. For example, if you define commonly used functions in a file named TestUtilities.js, you can make those functions available for use in your test script by including in that script the line:

#import "<path-to-library-folder>/TestUtilities.js"
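For example, a hypothetical TestUtilities.js (the file name and the helper function are illustrative, not part of the API) might define a routine that any test script can call after importing the file:

// TestUtilities.js (hypothetical shared library)
function logCurrentScreen(message) {
    // Log a marker message and dump the element tree of the current screen
    UIALogger.logMessage(message);
    UIATarget.localTarget().logElementTree();
}

A main test script could then import and use the helper as follows:

#import "<path-to-library-folder>/TestUtilities.js"
UIALogger.logStart("Smoke Test");
logCurrentScreen("App launched; logging initial element tree.");
UIALogger.logPass("Smoke Test");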

Changes you make with the script editor are saved when you save your trace document. For scripts created in the editor, changes are saved as part of the trace document itself. To save those changes in a file you can access on disk, you have to export the script. See To export a script to a file on a disk above.

Recording Manual User Interface Actions into Automation Scripts

A capture feature simplifies script development by allowing you to record actions that you perform on a target iOS device or in iOS Simulator. To use this feature, create an Automation trace document and then capture actions that you perform on the device. These captured actions are incorporated into your script as expressions that you can edit.

  • To record manual user interface actions

The Automation instrument generates expressions in your script for the actions you perform. Some of these expressions include tokens that contain alternative syntax for the expression. To see the alternative syntax, select the arrow at the right of the token. To select the currently displayed syntax for a token and flatten the expression, double-click the token.

To configure the Automation instrument to automatically start and stop your script under control of the Instruments Record button in the toolbar, select the “Run on Record” checkbox.

If your app crashes or goes to the background, your script is blocked until the app is frontmost again, at which time the script continues to run.

Important: You must explicitly stop recording, either with the Stop button or by selecting Stop when Run Completes (not the default). Completion or termination of your script does not turn off recording.

Accessing and Manipulating UI Elements

The Accessibility-based mechanism underlying the UI Automation feature represents every control in your app as a uniquely identifiable element. To perform an action on an element in your app, you explicitly identify that element in terms of the app’s element hierarchy. To fully understand this section, you should be familiar with the information in iOS Human Interface Guidelines.

To illustrate the element hierarchy, this section refers to the Recipes iOS app shown in Figure 11-1, which is available as the code sample iPhoneCoreDataRecipes from the iOS Dev Center.

Figure 11-1  The Recipes app (Recipes screen)

UI Element Accessibility

Each accessible element inherits from the base class, UIAElement. Every element can contain zero or more other elements.

As detailed below, your script can access individual elements by their position within the element hierarchy. However, you can assign a unique name to each element by setting the label attribute and making sure Accessibility is selected in Interface Builder for the control represented by that element, as shown in Figure 11-2.

Figure 11-2  Setting the accessibility label in Interface Builder

UI Automation uses the accessibility label (if it’s set) to derive a name property for each element. Aside from the obvious benefits, using such names can greatly simplify development and maintenance of your test scripts.

The name property is one of four properties of these elements that can be very useful in your test scripts.

  • name. Derived from the accessibility label

  • value. The current value of the control, for example, the text in a text field

  • elements. Any child elements contained within the current element, for example, the cells in a table view

  • parent. The element that contains the current element
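As a minimal sketch of reading these properties (assuming the Recipes app is frontmost and its table contains at least one cell), a script might log them like this:

var cell = UIATarget.localTarget().frontMostApp().mainWindow().tableViews()[0].cells()[0];
UIALogger.logMessage("name:   " + cell.name());          // derived from the accessibility label
UIALogger.logMessage("value:  " + cell.value());         // current value of the element
UIALogger.logMessage("parent: " + cell.parent().name()); // the containing element
cell.logElementTree();                                    // lists any child elements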

Understanding the Element Hierarchy

At the top of the element hierarchy is the UIATarget class, which represents the high-level user interface elements of the system under test (SUT)—that is, the device (or simulator) as well as the iOS and your app running on that device. For the purposes of your test, your app is the frontmost app (or target app), identified as follows:

UIATarget.localTarget().frontMostApp();

To reach the app window, the main window of your app, you would specify

UIATarget.localTarget().frontMostApp().mainWindow();

At startup, the Recipes app window appears as shown in Figure 11-1.

Inside the window, the recipe list is presented in an individual view, in this case a table view (see Figure 11-3).

Figure 11-3  Recipes table view

This is the first table view in the app’s array of table views, so you specify it as such using the zero index ([0]), as follows:

UIATarget.localTarget().frontMostApp().mainWindow().tableViews()[0];

Inside the table view, each recipe is represented by a distinct individual cell. You can specify individual cells in similar fashion. For example, using the zero index ([0]), you can specify the first cell as follows:

UIATarget.localTarget().frontMostApp().mainWindow().tableViews()[0].cells()[0];

Each of these individual cell elements is designed to contain a recipe record as a custom child element. In this example, the first cell contains the record for chocolate cake, which you can access by name with this line of code:

UIATarget.localTarget().frontMostApp().mainWindow().tableViews()[0].cells()[0].elements()["Chocolate Cake"];
Displaying the Element Hierarchy

You can use the logElementTree method for any element to list all of its child elements. The following code illustrates listing the elements for the main (Recipes) screen (or mode) of the Recipes app.

// List element hierarchy for the Recipes screen
UIALogger.logStart("Logging element tree ...");
UIATarget.localTarget().logElementTree();
UIALogger.logPass();

The output of the command is captured in the log displayed by the Automation instrument, as in Figure 11-4.

Figure 11-4  Output from the  logElementTree method

Note the indentation of each element line item, indicating that element’s level in the hierarchy. These levels may be viewed conceptually, as in Figure 11-5.

Figure 11-5  Element hierarchy (Recipes screen)

Although a screen is not technically an iOS programmatic construct and doesn’t explicitly appear in the hierarchy, it is a helpful concept in understanding that hierarchy. Tapping the Unit Conversion tab in the tab bar displays the Unit Conversion screen (or mode), shown in Figure 11-6.

Figure 11-6  Recipes app (Unit Conversion screen)

The following code taps the Unit Conversion tab in the tab bar to display the associated screen and then logs the element hierarchy associated with it:

// List element hierarchy for the Unit Conversion screen
var target = UIATarget.localTarget();
var appWindow = target.frontMostApp().mainWindow();
var element = target;
appWindow.tabBar().buttons()["Unit Conversion"].tap();
UIALogger.logStart("Logging element tree …");
element.logElementTree();
UIALogger.logPass();

The resulting log reveals the hierarchy to be as illustrated in Figure 11-7. Just as with the previous example, logElementTree is called for the target, but the results are for the current screen—in this case, the Unit Conversion screen.

Figure 11-7  Element hierarchy (Unit Conversion screen)
Simplifying Element Hierarchy Navigation

The previous code sample introduces the use of variables to represent parts of the element hierarchy. This technique allows for shorter, simpler commands in your scripts.

Using variables in this way also allows for some abstraction, yielding flexibility in code use and reuse. The following example uses a variable (destinationScreen) to control changing between the two main screens (Recipes and Unit Conversion) of the Recipes app:

// Switch screen (mode) based on value of variable
var target = UIATarget.localTarget();
var app = target.frontMostApp();
var tabBar = app.mainWindow().tabBar();
var destinationScreen = "Recipes";
if (tabBar.selectedButton().name() != destinationScreen) {
    tabBar.buttons()[destinationScreen].tap();
}

With minor variations, this code could work, for example, for a tab bar with more tabs or with tabs of different names.
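One possible generalization, shown here as a sketch rather than as part of the sample app, wraps the check in a reusable function that takes the destination tab name as a parameter:

// Hypothetical helper: switch to the named tab only if it isn't already selected
function switchToTab(tabName) {
    var tabBar = UIATarget.localTarget().frontMostApp().mainWindow().tabBar();
    if (tabBar.selectedButton().name() != tabName) {
        tabBar.buttons()[tabName].tap();
    }
}

switchToTab("Unit Conversion");
switchToTab("Recipes");

Because the tab name is a parameter, the same helper works for any tab bar whose buttons have accessibility names.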

Performing User Interface Gestures

Once you understand how to access the desired element, it’s relatively simple and straightforward to manipulate that element.

The UI Automation API provides methods to perform most UIKit user actions, including multi-touch gestures. For comprehensive detailed information about these methods, see UI Automation JavaScript Reference for iOS.

Tapping. Perhaps the most common touch gesture is a simple tap. Implementing a one-finger single tap on a known UI element is very simple. For example, tapping the right button, labeled with a plus sign (+), in the navigation bar of the Recipes app, displays a new screen used to add a new recipe.


This command is all that’s required to tap that button:

UIATarget.localTarget().frontMostApp().navigationBar().buttons()["Add"].tap();

Note that it uses the name Add to identify the button, presuming that the accessibility label has been set appropriately, as described above.

Of course, more complicated tap gestures are required to thoroughly test any sophisticated app. You can specify any standard tap gestures. For example, to tap once at an arbitrary location on the screen, you just need to provide the screen coordinates:

UIATarget.localTarget().tap({x:100, y:200});

This command taps at the x and y coordinates specified, regardless of what's at that location on the screen.

More complex taps are also available. To double-tap the same location, you could use this code:

UIATarget.localTarget().doubleTap({x:100, y:200});

And to perform a two-finger tap to test zooming in and out, for example, you could use this code:

UIATarget.localTarget().twoFingerTap({x:100, y:200});

Pinching. A pinch open gesture is typically used to zoom in or expand an object on the screen, and a pinch close gesture is used for the opposite effect—to zoom out or shrink an object on the screen. You specify the coordinates to define the start of the pinch close gesture or end of the pinch open gesture, followed by a number of seconds for the duration of the gesture. The duration parameter allows you some flexibility in specifying the speed of the pinch action.

UIATarget.localTarget().pinchOpenFromToForDuration({x:20, y:200}, {x:300, y:200}, 2);
UIATarget.localTarget().pinchCloseFromToForDuration({x:20, y:200}, {x:300, y:200}, 2);

Dragging and flicking. If you need to scroll through a table or move an element on screen, you can use the dragFromToForDuration method. You provide coordinates for the starting location and ending location, as well as a duration, in seconds. The following example specifies a drag gesture from location 160, 200 to location 160, 400, over a period of 1 second:

UIATarget.localTarget().dragFromToForDuration({x:160, y:200}, {x:160, y:400}, 1);

A flick gesture is similar, but it is presumed to be a fast action, so it doesn’t require a duration parameter.

UIATarget.localTarget().flickFromTo({x:160, y:200}, {x:160, y:400});

Entering text. Your script will likely need to test that your app handles text input correctly. To do so, it can enter text into a text field by simply specifying the target text field and setting its value with the setValue method. The following example uses a local variable to provide a long string as a test case for the first text field (index [0]) in the current screen:

var recipeName = "Unusually Long Name for a Recipe";
UIATarget.localTarget().frontMostApp().mainWindow().textFields()[0].setValue(recipeName);

Navigating in your app with tabs. To test navigating between screens in your app, you’ll very likely need to tap a tab in a tab bar. Tapping a tab is much like tapping a button; you access the appropriate tab bar, specify the desired button, and tap that button, as shown in the following example:

var tabBar = UIATarget.localTarget().frontMostApp().mainWindow().tabBar();
var selectedTabName = tabBar.selectedButton().name();
if (selectedTabName != "Unit Conversion")  {
    tabBar.buttons()["Unit Conversion"].tap();
}

First, a local variable is declared to represent the tab bar. Using that variable, the script accesses the tab bar to determine the selected tab and get the name of that tab. Finally, if the name of the selected tab does not match the name of the desired tab (in this case “Unit Conversion”), the script taps the desired tab.

Scrolling to an element. Scrolling is a large part of a user’s interaction with many apps. UI Automation provides a variety of methods for scrolling. The basic methods allow for scrolling to the next element left, right, up, or down. More sophisticated methods support greater flexibility and specificity in scrolling actions. One such method is scrollToElementWithPredicate, which allows you to scroll to an element that meets certain criteria that you specify. This example accesses the appropriate table view through the element hierarchy and scrolls to a recipe in that table view whose name starts with “Turtle Pie.”

UIATarget.localTarget().frontMostApp().mainWindow().tableViews()[0]
    .scrollToElementWithPredicate("name beginswith 'Turtle Pie'");

Using the scrollToElementWithPredicate method allows scrolling to an element whose exact name may not be known.

Using predicate functionality can significantly expand the capability and applicability of your scripts. For more information on using predicates, see Predicate Programming Guide.

Other useful methods for flexibility in scrolling include scrollToElementWithName and scrollToElementWithValueForKey. See UIAScrollView Class Reference for more information.
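For example, a brief sketch using scrollToElementWithName, assuming the exact accessibility name of the target cell is known:

// Scroll to a cell whose exact accessibility name is known
var recipeTable = UIATarget.localTarget().frontMostApp().mainWindow().tableViews()[0];
recipeTable.scrollToElementWithName("Chocolate Cake");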

Accessibility Label and Identifier Attributes

The label attribute and identifier attribute figure prominently in your script’s ability to access UI elements, so it’s a good idea to understand how they are used.

Setting a meaningful value for the label attribute is optional but recommended. You can set and view the label string in the Label text field in the Accessibility section of the Identity inspector in Interface Builder. This label is expected to be descriptive but short, partly because assistive technologies such as Apple’s VoiceOver use it as the name of the associated UI element. In UI Automation, this label is returned by the label method. It is also returned by the name method as a default if the identifier attribute is not set. For an overview of accessibility labels, see Tic Tac Toe: Creating Accessible Apps with Custom UI and Accessibility Programming Guide for iOS. For reference details, see UIAccessibilityElement Class Reference.

The identifier attribute allows you to use more descriptive names for elements. It is optional, but it must be set for the script to perform either of these two operations:

  • Accessing a container view by name while also being able to access its children

  • Accessing a UILabel view by name to obtain its displayed text (through its value attribute)

In UI Automation, the name method returns the value of this identifier attribute, if one is set. If it is not set, the name method returns the value of the label attribute.

Currently, you can set a value for the identifier attribute only programmatically, through the accessibilityIdentifier property. For details, see UIAccessibilityIdentification Protocol Reference.

Adding Timing Flexibility with Timeout Periods

While executing a test script, an attempt to access an element can fail for a variety of reasons. For example, an action could fail if:

  • The app is still in the process of launching.

  • A new screen hasn’t yet been completely drawn.

  • The element (such as a button your script is trying to click) may be drawn, but its contents are not filled in or updated yet.

In situations like these, your script may need to wait for some action to complete before proceeding. In the Recipes app, for example, the user taps the Recipes tab to return from the Unit Conversion screen to the Recipes screen. However, UI Automation may detect the existence of the Add button, enabling the test script to attempt to tap it—before the button is actually drawn and the app is actually ready to accept that tap. An accurate test must ensure that the Recipes screen is completely drawn and that the app is ready to accept user interaction with the controls within that screen before proceeding.

To provide some flexibility in such cases and to give you finer control over timing, UI Automation provides for a timeout period, a period during which it repeatedly attempts to perform the specified action before failing. If the action completes during the timeout period, that line of code returns, and your script can proceed. If the action doesn’t complete during the timeout period, an exception is thrown and UI Automation returns a UIAElementNil object. A UIAElementNil object is always considered invalid.

The default timeout period is five seconds, but your script can change that at any time. For example, you might decrease the timeout period if you want to test whether an element exists but don’t need to wait if it isn’t. On the other hand, you might increase the timeout period when the script must access an element but the user interface is slow to update. The following methods for manipulating the timeout period are available in the UIATarget class:

  • timeout: Returns the current timeout value.

  • setTimeout: Sets a new timeout value.

  • pushTimeout: Stores the current timeout value on a stack and sets a new timeout value.

  • popTimeout: Retrieves the previous timeout value from a stack, restores it as the current timeout value, and returns it.

To make this feature as easy as possible to use, UI Automation uses a stack model. You push a custom timeout period to the top of the stack, as with the following code that shortens the timeout period to two seconds.

UIATarget.localTarget().pushTimeout(2);

You then run the code to perform the action and pop the custom timeout off the stack.

UIATarget.localTarget().popTimeout();

Using this approach you end up with a robust script, waiting a reasonable amount of time for something to happen.
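Put together, the pattern looks like the following sketch; the two-second value and the Add button are only examples:

var target = UIATarget.localTarget();
target.pushTimeout(2);   // temporarily shorten the timeout to 2 seconds
var addButton = target.frontMostApp().navigationBar().buttons()["Add"];
if (addButton.isValid()) {
    addButton.tap();
}
else {
    UIALogger.logMessage("Add button not available within 2 seconds.");
}
target.popTimeout();     // restore the previous timeout value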

Note: Although using explicit delays is typically not encouraged, on occasion it may be necessary. The following code shows how you specify a delay of 2 seconds:

UIATarget.localTarget().delay(2);

For more details, see UIATarget Class Reference.

Note: A timeout value of 0 immediately returns a UIAElementNil object if the initial attempt to access the element fails.

Logging Test Results and Data

Your script reports log information to the Automation instrument, which gathers it and reports it back for analysis.

When writing your tests, you should log as much information as you can, if just to help you diagnose any failures that occur. At a bare minimum, you should log when each test begins and ends, identifying the test performed and recording pass/fail status. This kind of minimal logging is almost automatic in UI Automation. You simply call logStart with the name of your test, run your test, then call logPass or logFail as appropriate, as shown in the following example:

var testName = "Module 001 Test";
UIALogger.logStart(testName);
//some test code
UIALogger.logPass(testName);

But it’s a good practice to log what transpires whenever your script interacts with a control. Whether you’re validating that parts of your app perform properly or you’re still tracking down bugs, it’s hard to imagine having too much log information to analyze. To this end, you can log just about any occurrence using logMessage, and you can even supplement the textual data with screenshots.

The following code example expands the logging of the previous example to include a free-form log message and a screenshot:

var testName = "Module 001 Test";
UIALogger.logStart(testName);
//some test code
UIALogger.logMessage("Starting Module 001 branch 2, validating input.");
//capture a screenshot with a specified name
UIATarget.localTarget().captureScreenWithName("SS001-2_AddedIngredient");
//more test code
UIALogger.logPass(testName);

The screenshot requested in the example would be saved back to Instruments and appear in the Editor Log in the detail pane with the specified filename (SS001-2_AddedIngredient.png, in this case).

Using Screenshots

Your script can capture screenshots using the captureScreenWithName and captureRectWithName methods in the UIATarget class. To ensure easy access to those screenshots, open the Logging section at the left of the template, select the Continuously Log Results option, and use the Choose Location pop-up menu to specify a folder for the log results. Each captured screenshot is stored in the results folder with the name specified by your script.
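For example, a sketch that captures only a portion of the screen; the rectangle values and the image name here are arbitrary:

// Capture a 100 x 50 point region whose origin is at (160, 200)
var rect = {origin:{x:160, y:200}, size:{width:100, height:50}};
UIATarget.localTarget().captureRectWithName(rect, "SS002_PartialToolbar");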

Note: The results folder pathname can be set from the command line using the UIARESULTSPATH environment variable. Setting this variable overrides the results folder setting in the trace template.

Verifying Test Results

The crux of testing is being able to verify that each test has been performed and that it has either passed or failed. This code example runs the test testName to determine whether a recipe cell whose name starts with “Tarte” exists in the recipe table view. First, a local variable is used to specify the cell criteria:

var cell = UIATarget.localTarget().frontMostApp().mainWindow()
    .tableViews()[0].cells().firstWithPredicate("name beginswith 'Tarte'");

Next, the script uses the isValid method to test whether a valid element matching those criteria exists in the recipe table view.

if (cell.isValid()) {
    UIALogger.logPass(testName);
}
else {
    UIALogger.logFail(testName);
}

If a valid cell is found, the code logs a pass message for the testName test; if not, it logs a failure message.

Notice that this test specifies firstWithPredicate and "name beginsWith 'Tarte'". These criteria yield a reference to the cell for “Tarte aux Fraises,” which works for the default data already in the Recipes sample app. If, however, a user adds a recipe for “Tarte aux Framboises,” this example may or may not give the desired results.
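If the exact name is known, a stricter check (a sketch, reusing the testName variable from the example above) avoids that ambiguity by matching the full name:

var exactCell = UIATarget.localTarget().frontMostApp().mainWindow()
    .tableViews()[0].cells().firstWithName("Tarte aux Fraises");
if (exactCell.isValid()) {
    UIALogger.logPass(testName);
}
else {
    UIALogger.logFail(testName);
}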

Handling Alerts

In addition to verifying that your app’s alerts perform properly, your test should accommodate alerts that appear unexpectedly from outside your app. For example, it’s not unusual to get a text message while checking the weather or playing a game.

Handling Externally Generated Alerts

Although it may seem somewhat paradoxical, your app and your tests should expect that unexpected alerts can occur whenever your app is running. Fortunately, UI Automation includes a default alert handler that makes external alerts easy for your script to cope with. Your script provides an alert handler function called onAlert, which is called when an alert occurs; the handler can take any appropriate action and then simply return the alert to the default handler for dismissal.

The following code example illustrates a very simple alert case:

UIATarget.onAlert = function onAlert(alert) {
    var title = alert.name();
    UIALogger.logWarning("Alert with title '" + title + "' encountered.");
    // return false to use the default handler
    return false;
}

All this handler does is log a message noting that an alert was received and then return false. Returning false directs the UI Automation default alert handler to simply dismiss the alert. In the case of an alert for a received text message, for example, UI Automation taps the Close button.

Note: The default handler stops dismissing alerts after reaching an upper limit of sequential alerts. In the unlikely case that your test reaches this limit, you should investigate possible problems with your testing environment and procedures.

Handling Internally Generated Alerts

As part of your app, you will have alerts that need to be handled. In those instances, your alert handler needs to perform the appropriate response and return true to the default handler, indicating that the alert has been handled.

The following code example expands slightly on the basic alert handler. After logging the alert type, it tests whether the alert is the specific one that’s anticipated. If so, it taps the Continue button, which is known to exist, and returns true to skip the default dismissal action.

UIATarget.onAlert = function onAlert(alert) {
    var title = alert.name();
    UIALogger.logWarning("Alert with title '" + title + "' encountered.");
    if (title == "The Alert We Expected") {
        alert.buttons()["Continue"].tap();
        return true;  //alert handled, so bypass the default handler
    }
    // return false to use the default handler
    return false;
}

This basic alert handler can be generalized to respond to just about any alert received, while allowing your script to continue running.
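One way to generalize it, shown here as a hypothetical sketch, is to map expected alert titles to the buttons that should dismiss them and to fall back to the default handler for anything else:

// Hypothetical table of expected alert titles and the buttons that handle them
var expectedAlerts = {
    "The Alert We Expected": "Continue",
    "Delete Recipe?": "Cancel"
};

UIATarget.onAlert = function onAlert(alert) {
    var title = alert.name();
    UIALogger.logWarning("Alert with title '" + title + "' encountered.");
    var buttonName = expectedAlerts[title];
    if (buttonName) {
        alert.buttons()[buttonName].tap();
        return true;   // alert handled, so bypass the default handler
    }
    // return false to use the default handler
    return false;
}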

Detecting and Specifying Device Orientation

A well-behaved iOS app is expected to handle changes in device orientation gracefully, so your script should anticipate and test for such changes.

UI Automation provides setDeviceOrientation to simulate a change in the device orientation. This method uses the constants listed in Table 11-1.

Note: With regard to device orientation handling, it bears repeating that the functionality is entirely simulated in software. Hardware features such as raw accelerometer data are both unavailable to this UI Automation feature and unaffected by it.

Table 11-1  Device orientation constants

  • UIA_DEVICE_ORIENTATION_UNKNOWN. The orientation of the device cannot be determined.

  • UIA_DEVICE_ORIENTATION_PORTRAIT. The device is in portrait mode, with the device upright and the home button at the bottom.

  • UIA_DEVICE_ORIENTATION_PORTRAIT_UPSIDEDOWN. The device is in portrait mode but upside down, with the device upright and the home button at the top.

  • UIA_DEVICE_ORIENTATION_LANDSCAPELEFT. The device is in landscape mode, with the device upright and the home button on the right side.

  • UIA_DEVICE_ORIENTATION_LANDSCAPERIGHT. The device is in landscape mode, with the device upright and the home button on the left side.

  • UIA_DEVICE_ORIENTATION_FACEUP. The device is parallel to the ground with the screen facing upward.

  • UIA_DEVICE_ORIENTATION_FACEDOWN. The device is parallel to the ground with the screen facing downward.

In contrast to device orientation is interface orientation, which represents the rotation required to keep your app's interface oriented properly upon device rotation. Note that in landscape mode, device orientation and interface orientation are opposite because rotating the device requires rotating the content in the opposite direction.

UI Automation provides the interfaceOrientation method to get the current interface orientation. This method uses the constants listed in Table 11-2.

Table 11-2  Interface orientation constants

  • UIA_INTERFACE_ORIENTATION_PORTRAIT. The interface is in portrait mode, with the bottom closest to the home button.

  • UIA_INTERFACE_ORIENTATION_PORTRAIT_UPSIDEDOWN. The interface is in portrait mode but upside down, with the top closest to the home button.

  • UIA_INTERFACE_ORIENTATION_LANDSCAPELEFT. The interface is in landscape mode, with the left side closest to the home button.

  • UIA_INTERFACE_ORIENTATION_LANDSCAPERIGHT. The interface is in landscape mode, with the right side closest to the home button.

The following example changes the device orientation (in this case, to landscape left), then changes it back (to portrait):

var target = UIATarget.localTarget();
var app = target.frontMostApp();
//set orientation to landscape left
target.setDeviceOrientation(UIA_DEVICE_ORIENTATION_LANDSCAPELEFT);
UIALogger.logMessage("Current orientation now " + app.interfaceOrientation());
//reset orientation to portrait
target.setDeviceOrientation(UIA_DEVICE_ORIENTATION_PORTRAIT);
UIALogger.logMessage("Current orientation now " + app.interfaceOrientation());

Of course, once you've rotated, you do need to rotate back again.

When performing a test that involves changing the orientation of the device, it is a good practice to set the rotation at the beginning of the test, then set it back to the original rotation at the end of your test. This practice ensures that your script is always back in a known state.

You may have noticed the orientation logging in the example. Such logging provides additional assurance that your tests—and your testers—don’t become disoriented.

Testing for Multitasking

When a user exits your app by tapping the Home button or causing some other app to come to the foreground, your app is suspended. To simulate this occurrence, UI Automation provides the deactivateAppForDuration method. You just call this method, specifying a duration, in seconds, for which your app is to be suspended, as illustrated by the following example:

UIATarget.localTarget().deactivateAppForDuration(10);

This single line of code causes the app to be deactivated for 10 seconds, just as though a user had exited the app and returned to it 10 seconds later.
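A sketch that follows the deactivation with a quick sanity check once the app returns to the foreground (the tab bar check is just one example of verifying state):

var target = UIATarget.localTarget();
target.deactivateAppForDuration(10);   // app is backgrounded for 10 seconds
// After reactivation, confirm that the UI is back in the expected state
var tabBar = target.frontMostApp().mainWindow().tabBar();
if (tabBar.isValid()) {
    UIALogger.logMessage("App returned to the foreground; tab bar is present.");
}
else {
    UIALogger.logError("Tab bar not found after reactivation.");
}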

Running a Test Script from an Xcode Project

You can easily automate running your test script by creating a custom Automation instrument template.

Creating a Custom Automation Instrument Template

To create a custom Automation instrument template:

  1. Launch the Instruments app.

  2. Choose the Automation template to create a trace document.

    Note: Alternatively, you can add the Automation instrument to an existing trace template from the UI Automation group of the instrument library and drag it to your trace document.

  3. Choose View > Detail, if necessary, to display the detail view.

  4. Select your script from the list.

    Note: If your script is not in the list, you can import it (choose Add > Import) or create a new one (choose Add > Create).

  5. Edit your script as needed in the Script area of the detail pane.

  6. Choose File > Save as Template, name the template, and save it to the default Instruments template location:

    ~/Library/Application Support/Instruments/Templates/

Executing an Automation Instrument Script in Xcode

After you have created your customized Automation template, you can execute your test script from Xcode by following these steps:

  1. Open your project in Xcode.

  2. From the Scheme pop-up menu (in the workspace window toolbar), select Edit Scheme for a scheme with which you would like to use your script.

  3. Select Profile from the left column of the scheme editing dialog.

  4. Choose your application from the Executable pop-up menu.

  5. Choose your customized Automation Instrument template from the Instrument pop-up menu.

  6. Click OK to approve your changes and dismiss the scheme editor dialog.

  7. Choose Product > Profile.

    Instruments launches and executes your test script.

Executing an Automation Instrument Script from the Command Line

You can also execute your test script from the command line. If you have created a customized Automation template as described in Creating a Custom Automation Instrument Template, you can use the following simple command:

instruments -w deviceID -t templateFilePath targetAppName

deviceID

The 40-character device identifier, available in the Xcode Devices organizer, and in iTunes.

Note: Omit the device identifier option (-w deviceID in this example) to target the Simulator instead of a device.

templateFilePath

The full pathname of your customized Automation template, by default, ~/Library/Application Support/Instruments/Templates/templateName, where templateName is the name you saved it with.

targetAppName

The local name of the application. When targeting a device, omit the pathname and .app extension. When targeting a simulator, use the full pathname.

You can use the default trace template if you don’t want to create a custom one. To do so, you use the environment variables UIASCRIPT and UIARESULTSPATH to identify the script and the results directory.

instruments -w deviceID -t defaultTemplateFilePath targetAppName \
   -e UIASCRIPT scriptFilePath -e UIARESULTSPATH resultsFolderPath

defaultTemplateFilePath

The full pathname of the default template:

/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/Library/Instruments/PlugIns/AutomationInstrument.bundle/Contents/Resources/Automation.tracetemplate
scriptFilePath

The file-system location of your test script.

resultsFolderPath

The file-system location of the directory to hold the results of your test script.
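For example, a hypothetical invocation that targets the simulator (so the -w option is omitted) might look like the following; the bracketed paths are placeholders for your own locations:

instruments -t /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/Library/Instruments/PlugIns/AutomationInstrument.bundle/Contents/Resources/Automation.tracetemplate \
   "<path-to-app>/Recipes.app" -e UIASCRIPT "<path-to-scripts>/TestRecipes.js" -e UIARESULTSPATH "<path-to-results-folder>"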


Creating Custom Instruments

You’ve learned that Instruments has built-in instruments that provide a great deal of information about the inner workings of your app. Sometimes, though, you may want to tailor the information being gathered more closely to your own code. For example, instead of gathering data every time a function is called, you might want to set conditions on when data is gathered. Alternatively, you might want to dig deeper into your own code than the built-in instruments allow. For these situations, Instruments lets you create custom instruments. Whenever possible, it is recommended that you use an existing instrument instead of creating a new instrument. Creating custom instruments is an advanced feature.

The following sections show you how to create a custom instrument and how to use that instrument both with the Instruments app and with the dtrace command-line tool.

About Custom Instruments

Custom instruments use DTrace for their implementation. DTrace is a dynamic tracing facility originally created by Sun and ported to OS X. Because DTrace taps into the operating system kernel, you have access to low-level information about the kernel itself and the user processes running on your computer. Many of the built-in instruments are already based on DTrace. And even though DTrace is itself a very powerful and complex tool, Instruments provides a simple interface that gives you access to the power of DTrace without the complexity.

DTrace has not been ported to iOS, so it is not possible to create custom instruments for devices running iOS.

Important: Although the custom instrument builder simplifies the process of creating DTrace probes, you should still be familiar with DTrace and how it works before creating new instruments. Many of the more powerful debugging and data gathering actions require you to write DTrace scripts. To learn about DTrace and the D scripting language, see the Solaris Dynamic Tracing Guide, available from the Oracle Technology Network. For information about the dtrace command-line tool, see the dtrace man page.

Note: Several Apple apps—namely, iTunes, DVD Player, and apps that use QuickTime—prevent the collection of data through DTrace (either temporarily or permanently) in order to protect sensitive and copyrighted data. Therefore, you should not run those apps when performing systemwide data collection.

Custom instruments are built using DTrace probes. A probe is like a sensor that you place in your code. It corresponds to a location or event, such as a function entry point, to which DTrace can bind. When the function executes or the event is generated, the associated probe fires and DTrace runs whatever actions are associated with the probe. Most DTrace actions simply collect data about the operating system and user app behavior at that moment. It is possible, however, to run custom scripts as part of an action. Scripts let you use the features of DTrace to fine tune the data you gather.

Probes fire each time they are encountered, but the action associated with the probe need not be run every time the probe fires. A predicate is a conditional statement that allows you to restrict when the probe’s action is run. For example, you can restrict a probe to a specific process or user, or you can run an action when a specific condition in your instrument is true. By default, probes do not have any predicates, meaning that the associated action runs every time the probe fires. You can add any number of predicates to a probe, however, and link them together using AND and OR operators to create complex decision trees.
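In raw DTrace terms, a probe, a predicate, and an action look like the following sketch, written in the D language; the process name is hypothetical, and in Instruments the configuration sheet generates this kind of clause for you:

/* Fire on entry to the open system call, but only for the process named Recipes */
syscall::open:entry
/ execname == "Recipes" /
{
    printf("open() called with path %s\n", copyinstr(arg0));
}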

A custom instrument consists of the following blocks:

  • A description block, containing the name, category, and description of the instrument

  • One or more probes, each containing its associated actions and predicates

  • A DATA declaration area, for declaring global variables shared by all probes

  • A BEGIN script, which initializes any global variables and performs any startup tasks required by the instrument

  • An END script, which performs any final cleanup actions

All instruments must have at least one probe with its associated actions. Similarly, all instruments should have an appropriate name and description to identify them to Instruments users. Instruments displays your instrument’s descriptive information in the library window. Providing good information makes it easier to remember what the instrument does and how it should be used.

An instrument is not required to have a global DATA section or BEGIN and END scripts. Those elements are used in advanced instrument design when you want to share data among probes or provide some sort of initial configuration for your instrument. The creation of DATA, BEGIN, and END blocks is described in Tips for Writing Custom Scripts.

Creating a Custom Instrument

To create a custom DTrace instrument, select Instrument > Build New Instrument. This command displays the instrument configuration sheet, shown in Figure 12-1. You use this sheet to specify your instrument information, including any probes and custom scripts.

Figure 12-1  The instrument configuration sheet

At a minimum, you should provide the following information for every instrument you create:

  • Name. The name associated with your custom instrument in the library.

  • Category. The category in which your instrument appears in the library. You can specify the name of an existing category—such as Memory—or create your own.

  • Description. The instrument description, used in both the library window and in the instrument’s help tag.

  • Probe provider. The probe type and the details of when it should fire. Typically, this involves specifying the method or function to which the probe applies. For more information, see Specifying the Probe Provider.

  • Probe action. The data to record or the script to execute when your probe fires; see Adding Actions to a Probe.

An instrument should contain at least one probe and may contain more than one. The probe definition consists of the provider information, predicate information, and action. All probes must specify the provider information at a minimum, and nearly all probes define some sort of action. The predicate portion of a probe definition is optional but can be a very useful tool for focusing your instrument on the correct data.

Adding and Deleting Probes

Every new instrument comes with one probe that you can configure. To add more probes, click the Add button (+) at the bottom of the instrument configuration dialog.

To remove a probe from your instrument, click the probe to select it and click the Remove button (-) at the bottom of the instrument configuration dialog.


When adding probes, it is a good idea to provide a descriptive name for the probe. By default, Instruments enumerates probes with names like Probe 1 and Probe 2.

Specifying the Probe Provider

To specify the location point or event that triggers a probe, you must associate the appropriate provider with the probe. Providers are kernel modules that act as agents for DTrace, providing the instrumentation necessary to create probes. You do not need to know how providers operate to create an instrument, but you do need to know the basic capabilities of each provider. Table 12-1 lists the providers that are supported by the Instruments app and available for use in your custom instruments. For each provider, the name displayed in the instrument configuration sheet is listed first, followed by the actual name of the provider used in the corresponding DTrace script and a description of when the probe fires.

Table 12-1  DTrace providers

  • User Process (DTrace provider: pid). The probe fires on entry (or return) of the specified function in your code. You must provide the function name and the name of the library that contains it.

  • Objective-C (DTrace provider: objc). The probe fires on entry (or return) of the specified Objective-C method. You must provide the method name and the class to which it belongs.

  • System Call (DTrace provider: syscall). The probe fires on entry (or return) of the specified system library function.

  • DTrace (DTrace provider: DTrace). The probe fires when DTrace itself enters a BEGIN, END, or ERROR block.

  • Kernel Function Boundaries (DTrace provider: fbt). The probe fires on entry (or return) of the specified kernel function. You must provide the kernel function name and the name of the library that contains it.

  • Mach (DTrace provider: mach_trap). The probe fires on entry (or return) of the specified Mach library function.

  • Profile (DTrace provider: profile). The probe fires regularly at the specified time interval on each core of the machine. Profile probes can fire with a granularity that ranges from microseconds to days.

  • Tick (DTrace provider: tick). The probe fires at periodic intervals on one core of the machine. Tick probes can fire with a granularity that ranges from microseconds to days. You might use this provider to perform periodic tasks that are not required to be on a particular core.

  • I/O (DTrace provider: io). The probe fires at the start of the specified kernel routine. For a list of functions monitored by this probe, use the dtrace -l command from Terminal to get a list of probe points. You can then search this list for probes monitored by the io module.

  • Kernel Process (DTrace provider: proc). The probe fires on the initiation of one of several kernel-level routines. For a list of functions monitored by this probe, use the dtrace -l command from Terminal to get a list of probe points. You can then search this list for functions monitored by the proc module.

  • User-Level Synchronization (DTrace provider: plockstat). The probe fires at one of several synchronization points. You can use this provider to monitor mutex and read-write lock events.

  • CPU Scheduling (DTrace provider: sched). The probe fires when CPU scheduling events occur.

  • Core Data (DTrace provider: CoreData). The probe fires at one of several Core Data–specific events. For a list of methods monitored by this probe, use the dtrace -l command from Terminal to get a list of probe points. You can then search this list for methods monitored by the CoreData module.

After selecting the provider for your probe, you need to specify the information the probe requires. For example, function-level providers may need a function or method name, along with the code module or class that contains it. Other providers may only need you to select appropriate events from a pop-up menu.

After you have configured a probe, you can proceed to add additional predicates to it (to determine when it should fire) or you can go ahead and define the action for that probe.

Adding Predicates to a Probe

Predicates give you control over when a probe’s action is executed by Instruments. You can use predicates to prevent Instruments from gathering data when you don’t want it or think the data might be erroneous. For example, if your code exhibits unusual behavior only when the stack reaches a certain depth, you can use a predicate to specify the minimum target stack depth. Every time a probe fires, Instruments evaluates the associated predicates. Only if they evaluate to true does DTrace perform the associated actions.

  • To add a predicate to a probe
Figure 12-2  Adding a predicate

You can add subsequent predicates using the Add buttons (+) of either the probe or the predicate. To remove a predicate, click the Remove button (-) next to the predicate.

Instruments evaluates predicates from top to bottom in the order in which they appear. To rearrange predicates, click the predicate’s row and drag it to a new location in the table. You can link predicates using AND and OR operators, but you cannot group them to create nested condition blocks. Instead, order your predicates carefully to ensure that all of the appropriate conditions are checked.

Use the first pop-up menu in a predicate row to choose the data to inspect as part of the condition. Table 12-2 lists the standard variables defined by DTrace that you can use in your predicates or script code. For each variable, the name as it appears in the instrument configuration panel is listed first, followed by the actual name of the variable used in corresponding DTrace scripts and a description. In addition to testing the standard variables, you can test against custom variables and constants from your script code by specifying the Custom variable type in the predicate field.

Table 12-2  DTrace variables

  • Caller (DTrace variable: caller). The value of the current thread’s program counter just before entering the probe. This variable contains an integer value.

  • Chip (DTrace variable: chip). The identifier for the physical chip executing the probe. This is a 0-based integer indicating the index of the current core. For example, a four-core machine has cores 0 through 3.

  • CPU (DTrace variable: cpu). The identifier for the CPU executing the probe. This is a 0-based integer indicating the index of the current core. For example, a four-core machine has cores 0 through 3.

  • Current Working Directory (DTrace variable: cwd). The current working directory of the current process. This variable contains a string value.

  • Last Error # (DTrace variable: errno). The error value returned by the last system call made on the current thread. This variable contains an integer value.

  • Executable (DTrace variable: execname). The name that was passed to exec to execute the current process. This variable contains a string value.

  • User ID (DTrace variable: uid). The real user ID of the current process. This variable contains an integer value.

  • Group ID (DTrace variable: gid). The real group ID of the current process. This variable contains an integer value.

  • Process ID (DTrace variable: pid). The process ID of the current process. This variable contains an integer value.

  • Parent ID (DTrace variable: ppid). The process ID of the parent process. This variable contains an integer value.

  • Thread ID (DTrace variable: tid). The thread ID of the current thread. This is the same value returned by the pthread_self function.

  • Interrupt Priority Level (DTrace variable: ipl). The interrupt priority level on the current CPU at the time the probe fired. This variable contains an unsigned integer value.

  • Function (DTrace variable: probefunc). The function name part of the probe’s description. This variable contains a string value.

  • Module (DTrace variable: probemod). The module name part of the probe’s description. This variable contains a string value.

  • Name (DTrace variable: probename). The name portion of the probe’s description. This variable contains a string value.

  • Provider (DTrace variable: probeprov). The provider name part of the probe’s description. This variable contains a string value.

  • Root Directory (DTrace variable: root). The root directory of the process. This variable contains a string value.

  • Stack Depth (DTrace variable: stackdepth). The stack frame depth of the current thread at the time the probe fired. This variable contains an unsigned integer value.

  • Relative Timestamp (DTrace variable: timestamp). The current value of the system’s timestamp counter, in nanoseconds. Because this counter increments from an arbitrary point in the past, use it to calculate only relative time differences. This variable contains an unsigned 64-bit integer value.

  • Virtual Timestamp (DTrace variable: vtimestamp). The amount of time the current thread has been running, in nanoseconds. This value does not include time spent in DTrace predicates and actions. This variable contains an unsigned 64-bit integer value.

  • Timestamp (DTrace variable: walltimestamp/1000). The current number of nanoseconds that have elapsed since 00:00 Coordinated Universal Time, January 1, 1970. This variable contains an unsigned 64-bit integer value.

  • arg0 through arg9 (DTrace variables: arg0 through arg9). The first 10 arguments to the probe, represented as raw 64-bit integers. If fewer than ten arguments were passed to the probe, the remaining variables contain the value 0.

  • Custom (DTrace variable: the name of your variable). Use this option to specify a variable or constant from one of your scripts.

In addition to specifying the condition variable, you must specify the comparison operator and the target value.

Adding Actions to a Probe

When a probe point defined by your instrument is hit and the probe’s predicate conditions evaluate to true, DTrace runs the actions associated with the probe. You use your probe’s actions to gather data or to perform some additional processing. For example, if your probe monitors a specific function or method, you could have it return the caller of that function and any stack trace information to Instruments. If you wanted a slightly more advanced action, you could use a script variable to track the number of times the function was called and report that information as well. And if you wanted an even more advanced action, you could write a script that uses kernel-level DTrace functions to determine the status of a lock used by your function. In this latter case, your script code might also return the current owner of the lock (if there is one) to help you determine the interactions among your code’s different threads.

Figure 12-3 shows the portion of the instrument configuration sheet where you specify your probe’s actions. The script portion simply contains a text field for you to type in your script code. (Instruments does not validate your code before passing it to DTrace, so check your code carefully.) The bottom section contains controls for specifying the data you want DTrace to return to Instruments. You can use the pop-up menus to configure the built-in DTrace variables you want to return. You can also select Custom from this pop-up menu and return one of your script variables.

Figure 12-3  Configuring a probe’s action

When you configure your instrument to return a custom variable, Instruments asks you to provide the following information:

  • The script variable containing the data

  • The name to apply to the variable in your instrument interface

  • The type of the variable

Any data your probe returned to Instruments is collected and displayed in your instrument’s detail pane. The detail pane displays all data variables regardless of type. If stack trace information is available for a specific probe, Instruments displays that information in your instrument’s Extended Detail inspector. In addition, Instruments automatically looks for integer data types returned by your instrument and adds those types to the list of statistics your instrument can display in the track pane.

Because DTrace scripts run in kernel space and the Instruments app runs in user space, if you want to return the value of a custom pointer-based script variable to Instruments, you must create a buffer to hold the variable’s data. The simplest way to create a buffer is to use the copyin or copyinstr subroutines found in DTrace. The copyinstr subroutine takes a pointer to a C string and returns the contents of the string in a form you can return to Instruments. Similarly, the copyin subroutine takes a pointer and size value and returns a buffer to the data, which you can later format into a string using the stringof keyword. Both of these subroutines are part of the DTrace environment and can be used from any part of your probe’s action definition. For example, to return the string from a C-style string pointer, you simply wrap the variable name with the copyinstr subroutine, as shown in Figure 12-4.

Figure 12-4  Returning a string pointer
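In script form, the idea looks like the following one-line sketch, which assumes arg0 is a pointer to a C string in the traced process:

/* Copy the C string that arg0 points to into a buffer that can be returned to Instruments */
self->message = copyinstr(arg0);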

Important: Instruments automatically wraps built-in variables (such as the arg0 through arg9 function arguments) with a call to copyinstr if the variable type is set to string. Instruments does not automatically wrap a script’s custom variables, however. You are responsible for ensuring that the data in a custom variable actually matches the type specified for that variable.

For a list of the built-in variables supported by Instruments, see Table 12-2. For more information on scripts and script variables, see Tips for Writing Custom Scripts. For more information on DTrace subroutines, including the copyin and copyinstr subroutines, see the Solaris Dynamic Tracing Guide, available from the Oracle Technology Network.

Tips for Writing Custom Scripts

You write DTrace scripts using the D scripting language, whose syntax is derived from a large subset of the C programming language. The D language combines the programming constructs of the C language with a special set of functions and variables to help you trace information in your app.

The following sections describe some of the common ways to use scripts in your custom instruments. These sections do not provide a comprehensive overview of the D language or the process for writing DTrace scripts. For information about scripting and the D language, see the Solaris Dynamic Tracing Guide, available from the Oracle Technology Network.

Writing BEGIN and END Scripts

If you want to do more than return the information in DTrace’s built-in variables to Instruments whenever your action fires, you need to write custom scripts. Scripts interact directly with DTrace at the kernel level, providing access to low-level information about the kernel and the active process. Most instruments use scripts to gather information not readily available from DTrace. You can also use scripts to manipulate raw data before returning it to Instruments. For example, you can use a script to normalize a data value to a specific range if you want to make it easier to compare that value graphically with other values in your instrument’s track pane.
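As a sketch of that kind of normalization, a probe action could scale a raw value into a 0 to 100 range before recording it. The variables this->bytesUsed and this->bytesTotal here are hypothetical and assumed to have been set earlier in the action, with this->bytesTotal nonzero:

/* Express usage as a percentage so it graphs on the same 0-100 scale as other statistics. */
this->percentUsed = (this->bytesUsed * 100) / this->bytesTotal;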

In Instruments, the custom instrument configuration sheet provides several areas where you can write DTrace scripts:

  • The DATA section contains definitions of any global variables you want to use in your instrument.

  • The BEGIN section contains any initialization code for your instrument.

  • Each probe contains script code as part of its action.

  • The END section contains any cleanup code for your instrument.

All script sections are optional. You are not required to have initialization scripts or cleanup scripts if your instrument does not need them. If your instrument defines global variables in its DATA section, however, it is recommended that you also provide an initialization script to set those variables to a known value. The D language does not allow you to assign values inline with your global variable declarations, so you must put those assignments in your BEGIN section. For example, a simple DATA section might consist of a single variable declaration, such as the following:

int myVariable;

The corresponding BEGIN section would then contain the following code to initialize that variable:

myVariable = 0;

If your probe actions change the value of myVariable, you might use your instrument’s END section to format and print the final value of the variable.
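For instance, an END section that reports the final value might contain nothing more than a printf action, as in this minimal sketch:

/* Print the final value of the global variable when tracing stops. */
printf("myVariable = %d\n", myVariable);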

Most of your script code is likely to be associated with individual probes. Each probe can have a script associated with its action. When it comes time to execute a probe’s action, DTrace runs your script code first and then returns any requested data back to Instruments. Because passing data back to Instruments involves copying data from the kernel space back to the Instruments app space, you should always pass data back to Instruments by configuring the appropriate entries in the “Record the following data:” section of the instrument configuration sheet. Variables returned manually from your script code may not be returned correctly to Instruments.

Accessing Kernel Data from Custom Scripts

Because DTrace scripts execute inside the system kernel, they have access to kernel symbols. If you want to look at global kernel variables and data structures from your custom instruments, you can do so in your DTrace scripts. To access a kernel variable, precede the name of the variable with the backquote character (`). The backquote character tells DTrace to look for the specified variable outside of the current script.

Listing 12-1 shows a sample action script that retrieves the current load information from the avenrun kernel variable and uses that variable to calculate a one-minute average load of the system. If you were to create a probe using the Profile provider, you could have this script gather load data periodically and then graph that information in Instruments.

Listing 12-1  Accessing kernel variables from a DTrace script

/* Integer portion of the one-minute load average. */
this->load1a = `avenrun[0]/1000;
/* Fractional portion of the one-minute load average, expressed as hundredths. */
this->load1b = ((`avenrun[0] % 1000) * 100) / 1000;
/* Combine both portions into a single value equal to the load average times 100. */
this->load1 = (100 * this->load1a) + this->load1b;

Scoping Variables Appropriately

DTrace scripts have an essentially flat structure, due to the lack of flow control statements and the desire to keep probe execution time to a minimum. That said, you can scope the variables in DTrace scripts to different levels depending on your needs. Table 12-3 lists the scoping levels for variables and the syntax for using variables at each level; a short example follows the table.

Table 12-3  Variable scope in DTrace scripts

  • Global (syntax example: myGlobal = 1;): Global variables are identified simply by the variable name. All probe actions on all system threads have access to variables in this space.

  • Thread (syntax example: self->myThreadVar = 1;): Thread-local variables are dereferenced from the self keyword. All probe actions running on the same thread have access to variables in this space. You might use this scope to collect data over the course of several runs of a probe’s action on the current thread.

  • Probe (syntax example: this->myLocalVar = 1;): Probe-local variables are dereferenced using the this keyword. Only the currently running probe has access to variables in this space. Typically, you use this scope to define temporary variables that you want the kernel to clean up when the current action ends.
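The following sketch shows all three scopes used together in a single probe action; totalCalls, self->callsOnThisThread, and this->isFirstCall are illustrative names rather than variables defined elsewhere in this document:

/* Global: visible to every probe action on every thread. */
totalCalls++;
/* Thread-local: visible only to probe actions running on the current thread. */
self->callsOnThisThread++;
/* Probe-local: temporary storage that is released when this action ends. */
this->isFirstCall = (totalCalls == 1);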

Finding Script Errors

If the script code for one of your custom instruments contains an error, Instruments displays an error message in the track pane when DTrace compiles the script. Instruments reports the error after you press the Record button in your trace document but before tracing actually begins. Inside the error message bubble is an Edit button. Clicking this button opens the instrument configuration sheet, which now identifies the probe with the error.

Exporting DTrace Scripts

Although Instruments provides a convenient interface for gathering trace data, there are still times when it is more convenient to gather trace data directly using DTrace. If you are a system administrator or are writing automated test scripts, for example, you might prefer to use the DTrace command-line interface to launch a process and gather the data. Using the command-line tool requires you to write your own DTrace scripts, which can be time consuming and can lead to errors. If you already have a trace document with one or more DTrace-based instruments, you can use the Instruments app to generate a DTrace script that provides the same behavior as the instruments in your trace document.

Instruments supports exporting DTrace scripts only for documents where all of the instruments are based on DTrace. This means that your document can include custom instruments and a handful of the built-in instruments, such as the instruments in the File System and CoreData groups in the Library window.

To export a DTrace script, use the DTrace Script Export command. This command places the script commands for your instruments in a text file that you can then pass to the dtrace command-line tool using the -s option. For example, if you export a script named MyInstrumentsScript.d, run it from Terminal using the following command:

sudo dtrace -s MyInstrumentsScript.d

Note: You must have superuser privileges to run dtrace in most instances, which is why the sudo command is used to run dtrace in the preceding example.

Another advantage of exporting your scripts from Instruments (as opposed to writing them manually) is that after running the script, you can import the resulting data back into Instruments and review it there. Scripts exported from Instruments print a start marker (with the text dtrace_output_begin) at the beginning of the DTrace output. To gather the data, copy all of the DTrace output (including the start marker) from Terminal and paste it into a text file, or redirect the output from the dtrace tool directly to a file. To import the data into Instruments, select the trace document from which you generated the original script and choose File > DTrace Data Import.

The Preferences window is accessed by selecting Instruments > Preferences. It contains six tabs where you can customize Instruments to best suit your needs.

General Tab

Use the General tab (see The General preference pane in Instruments) to configure basic Instruments preferences, including startup, keyboard shortcut, and warning options.

Table A-1  General tab

  • Always use deferred mode: Performs data analysis for all traces after data collection is complete.

  • Automatically time profile spinning applications: Automatically monitors for a spinning process while a trace is recorded. This can be a process other than the one being recorded. If detected, Instruments starts the Time Profiler instrument on the spinning process.

  • Suppress template chooser: Hides the template chooser when Instruments starts up and when a new trace document is created.

  • Save current run only: Saves only the current data collection run for each individual instrument.

  • Compress run data: Compresses each saved run into zip format.

  • Default document location: Specifies the location where new Instruments documents are created. By default, this is a temporary directory. Select Choose from this pop-up menu to use a different directory. Click Reset to use the temporary directory again.

  • Open Keyboard Shortcut Preferences: Opens the Keyboard > Shortcuts > Services pane in the System Preferences app, as shown in The Keyboard > Shortcuts > Services pane in System Preferences. From here, you can assign keyboard shortcuts to development services, such as a service that automatically opens an Xcode project in Instruments and profiles it with the System Trace template.

  • Reset “Don’t Ask Me” Warnings: Reenables dialog warnings you previously elected not to show. Instruments has several warning dialogs that you can disable by selecting the “Do not show this message again” checkbox in the dialog. To reenable all of these warning dialogs, click the Reset “Don’t Ask Me” Warnings button.

Figure A-1  The General preference pane in Instruments

Figure A-2  The Keyboard > Shortcuts > Services pane in System Preferences

Display Tab

Use the Display tab (see The Display preference pane in Instruments) to configure track display options in a trace document.

Table A-2  Display tab

  • Enforce initial deck height: When selected, prevents a custom deck height from being restored when an instrument document is reloaded and uses the template’s default deck height instead. When deselected, saves and restores the current deck height.

  • Sort process lists by identifier: When selected, sorts all process lists, such as the attach menu, by their process ID. When deselected, sorts process lists alphabetically.

  • Always snap track to fit at end of run: Automatically scales the track in a trace document at the end of a run to fit all data in the window.

Figure A-3  The Display preference pane in Instruments

DTrace Tab

Use the DTrace tab (see The DTrace preference pane in Instruments) to configure how DTrace-based instruments act. DTrace instruments use dynamic tracing to access low-level kernel operations and user processes running on your device.

Table A-3  DTrace tab

  • Buffer size: Sets the size of the DTrace kernel buffer (in megabytes). The default is 25 MB.

  • Max backtrace depth: Sets the maximum stack depth that is captured when using a DTrace instrument. The default is 128.

  • Permit zero match probes: Prevents an error when a specified probe is not found.

  • Preserve intermediate files: Prevents Instruments from removing intermediate DTrace data output files from the disk.

  • Flag runtime messages: Adds flags to the timeline for DTrace runtime status and error messages encountered during a recording.

Figure A-4  The DTrace preference pane in Instruments

Background Profiling Tab

Use the Background Profiling tab (see The Background Profiling preference pane in Instruments) to configure how the Time Profiler instrument behaves when operating in the background. Because background time profiling is a low-impact CPU sampler, you can activate it without Instruments running. To do so, enable the time profiling services in the Keyboard > Shortcuts > Services pane in System Preferences. You can even assign keyboard shortcuts to these services, as shown in The Keyboard > Shortcuts > Services pane in System Preferences. When Instruments is running, you can also start time profiling from the Instruments Dock menu (Control-click the Instruments icon in the Dock to display this menu).

Table A-4  Background Profiling tab

  • Sampling interval: Specifies how often a sample is taken. Type a numeric value in the field. Choose microsecond (μs), millisecond (ms), or second (sec) from the pop-up menu. Defaults to 1 millisecond.

  • Sampling duration: Sets the length of a sample trace. Type a numeric value in the field. Choose microsecond (μs), millisecond (ms), or second (sec) from the pop-up menu. Defaults to 5 seconds.

Figure A-5  The Background Profiling preference pane in Instruments

CPUs Tab

Use the CPUs tab (see The CPUs preference pane in Instruments) to configure Instruments for the CPU configuration of your device.

Table A-5  CPUs tab

  • Active Processor Cores: Adjusts how many cores of your system are active. Only active cores are scheduled to perform any operations when profiling. Use the slider to set the number of active cores equal to the number of cores on the device that you expect your application to run on. Changes to this preference persist until you change it again, or until your system is put to sleep or restarted.

  • Hardware Multi-Threading: Allows CPU cores to utilize multiple execution units. When disabled, there is only one active execution unit per processor core.

Figure A-6  The CPUs preference pane in Instruments

dSYMs and Paths Tab

Use the “dSYMs and Paths” tab (see The dSYMs and Paths preference pane in Instruments) to set global search paths for Instruments.

Table A-6  “dSYMs and Paths” tab

  • + (plus): Adds a search location. Opens a dialog for choosing a directory; navigate to the desired directory and click Open. Directories indexed by Spotlight are already searched, so you only need to add search paths for directories that aren’t indexed.

  • - (minus): Removes a selected search location. Select a path and click the minus (-) button to remove it.

  • dSYM Download Script: Provides the option of running a custom script to locate and access the necessary dSYM files. This option is provided for use by large developers with distributed code databases; it is not needed by the majority of developers.

  • When a run ends: Enabled only if a dSYM download script is specified. Configures how the download script is used automatically after a recording has finished. Options include “Don’t Download dSYMs,” “Download App dSYMs,” “Download App and User Framework dSYMs,” and “Download All dSYMs.” The default is “Don’t Download dSYMs.”

Figure A-7  The “dSYMs and Paths” preference pane in Instruments

Keyboard shortcuts provide an easy way for experienced users to perform actions without a mouse click. This appendix provides a list of keyboard shortcuts provided by the Instruments app.

Instruments Menu

Here are keyboard shortcuts for the Instruments menu:

Table B-1  Instruments menu keyboard shortcuts

  • Command-Comma (,): Preferences
  • Command-H: Hide Instruments
  • Command-Option-H: Hide Others
  • Command-Q: Quit Instruments

File Menu

Here are keyboard shortcuts for the File menu:

Table B-2  File menu keyboard shortcuts

  • Command-N: New
  • Command-O: Open
  • Command-W: Close
  • Command-S: Save
  • Command-Shift-S: Save As
  • Command-R: Record Trace
  • Command-Shift-R: Pause Trace
  • Command-Option-R: Record Options

Edit Menu

Here are the keyboard shortcuts for the Edit menu:

Table B-3  Edit menu keyboard shortcuts

  • Command-Z: Undo
  • Command-Shift-Z: Redo
  • Command-X: Cut
  • Command-C: Copy
  • Command-Shift-C: Deep Copy
  • Command-V: Paste
  • Shift-Option-Command-V: Paste and Match Style
  • Command-A: Select All
  • Command-F: Find
  • Command-G: Find Next
  • Command-Shift-G: Find Previous
  • Command-E: Use Selection for Find
  • Command-J: Jump to Selection
  • Command-Colon (:): Show Spelling and Grammar
  • Command-Semicolon (;): Check Spelling
  • Command-Down Arrow: Add Flag
  • Command-Up Arrow: Remove Flag
  • Command-Control-Space: Special Characters

View Menu

Here are the keyboard shortcuts for the View menu:

Table B-4  View menu keyboard shortcuts

  • Command-D: Detail
  • Command-1: Show Record Settings
  • Command-2: Show Display Settings
  • Command-3: Show Extended Detail
  • Command-Control-F: Full Screen
  • Command-Right Arrow: Next Flag
  • Command-Left Arrow: Previous Flag
  • Command-Dash (-): Decrease Deck Size
  • Command-Plus (+): Increase Deck Size
  • Command-Control-Z: Snap Track To Fit
  • Command-Less Than (<): Set Inspection Range Start
  • Command-Greater Than (>): Set Inspection Range End
  • Command-Period (.): Clear Inspection Range

Instrument Menu

Here are the keyboard shortcuts for the Instrument menu:

Table B-5  Instrument menu keyboard shortcuts

  • Command-B: Build New Instrument
  • Command-T: Trace Symbol
  • Command-Quote ("): Previous Run
  • Command-Apostrophe ('): Next Run

Window Menu

Here are the keyboard shortcuts for the Window menu:

Table B-6  Window menu keyboard shortcuts

  • Command-M: Minimize
  • Control-Z: Zoom
  • Command-L: Library
  • Command-Shift-T: Manage Flags

