General principles


1. These statements are not ultimate truths. In automation planning, as in so many other endeavors, you must keep in mind what problem you are trying to solve, and what context you are trying to solve it in. (Consensus)



2. GUI test automation is a significant software development effort that requires architecture, standards, and discipline. The general principles that apply to software design and implementation apply to automation design and implementation. (Consensus)



3. For efficiency and maintainability, we need first to develop an automation structure that is invariant across feature changes; we should develop GUI-based automation content only as features stabilize. (Consensus)
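One common way to build such an invariant structure is a thin abstraction layer: test scripts call logical actions, and only the layer that maps those actions onto widgets changes when features change. The sketch below is illustrative only; the class, widget, and driver names are invented, and the fake driver stands in for whatever the automation tool provides.

```python
# Hypothetical sketch: tests call logical actions on a screen object, so
# only this layer must change when the GUI's widgets or layout change.

class LoginScreen:
    """Invariant interface for tests; widget names below are illustrative."""

    def __init__(self, gui):
        self.gui = gui  # whatever driver object the automation tool supplies

    def log_in(self, user, password):
        self.gui.set_text("username_field", user)
        self.gui.set_text("password_field", password)
        self.gui.click("ok_button")


class FakeGui:
    """Stand-in driver so this sketch is runnable without a real tool."""

    def __init__(self):
        self.events = []

    def set_text(self, widget, value):
        self.events.append((widget, value))

    def click(self, widget):
        self.events.append((widget, "click"))


gui = FakeGui()
LoginScreen(gui).log_in("alice", "secret")
print(gui.events)  # the test script itself never names raw widgets
```

If the OK button is renamed or the login flow gains a step, only `LoginScreen` is edited; the test scripts that call `log_in` remain stable.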



4. Several of us had a sense of the patterns by which a company's automation efforts evolve over time:


First generalization (7 yes, 1 no): In the absence of previous automation experience, most automation efforts evolve through:



a. Failure in capture/playback. It doesn't matter whether we're capturing bits or widgets (object-oriented capture/replay);



b. Failure in using individually programmed test cases. (Individuals code test cases on their own, without following common standards and without building shared libraries.)



c. Development of libraries that are maintained on an ongoing basis. The libraries might contain scripted test cases or data-driven tests.
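A data-driven test, in this sense, is a driver written once and maintained in the shared library, with the test content kept as rows of data. A minimal sketch, assuming a toy function under test (`parse_price` is invented for illustration):

```python
# Hedged sketch of a data-driven test: the driver loop is shared library
# code; testers add rows to the table, not new scripts. In practice the
# table often lives in a spreadsheet or CSV file rather than the script.

def parse_price(text):
    """Toy function under test (invented stand-in)."""
    return round(float(text.replace("$", "").replace(",", "")), 2)

# Each row: (input, expected result).
CASES = [
    ("$1,234.50", 1234.50),
    ("19.99", 19.99),
    ("$0", 0.0),
]

# The driver: run every row and collect mismatches.
failures = [(text, expected, parse_price(text))
            for text, expected in CASES
            if parse_price(text) != expected]
print("failures:", failures)
```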



Second generalization (10 yes, 1 no): Failures of automation initiatives are commonly due to:



a. Using capture/playback as the principal means of creating test cases;



b. Using individually scripted test cases (i.e., test cases that individuals code on their own, without following common standards and without building shared libraries);



c. Using poorly designed frameworks. This is a common problem.



5. Straight replay of test cases yields a low percentage of defects. (Consensus)



Once the program passes a test, it is unlikely to fail that test again in the future. This led to several statements (none cleanly voted on) that automated testing can be dangerous because it can give us a falsely warm and fuzzy feeling that the program is not broken. Even if the program isn't broken today in the ways that it wasn't broken yesterday, there are probably many ways in which the program is broken. But you won't find them if you keep looking where the bugs aren't.



6. Of the bugs found during an automated testing effort, 60%-80% are found during development of the tests. That is, unless you create and run new test cases under the automation tool right from the start, most bugs are found during manual testing. (Consensus)



Most of us do not usually use the automation tool to run test cases the first time. In the traditional paradigm, you run the test case manually first, then add it to the automation suite after the program passes the test. However, you can use the tool more efficiently if you have a way of determining whether the program passed or failed the test that doesn't depend on previously captured output. For example:



Run the same series of tests on the program across different operating system versions or configurations. You may have never tested the program under this particular environment, but you know how it should work.



Run a function equivalence test. [11] In this case, you run two programs in parallel and feed the same inputs to both. The program that you are testing passes the test if its results always match those of the comparison program.
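The comparison program acts as the oracle, so no captured output is needed. A runnable sketch of the idea, with both implementations invented for illustration (a Newton's-method square root tested against Python's built-in power operator):

```python
# Sketch of a function equivalence test: feed identical inputs to the
# implementation under test and to a trusted comparison program, and
# count any mismatches. Both functions here are invented stand-ins.
import random

def sqrt_under_test(x):
    """The 'new' implementation being tested (Newton's method)."""
    guess = x or 1.0
    for _ in range(50):
        guess = (guess + x / guess) / 2 if guess else 0.0
    return guess

def sqrt_reference(x):
    """The trusted comparison program."""
    return x ** 0.5

random.seed(0)  # reproducible random inputs
mismatches = 0
for _ in range(1000):
    x = random.uniform(0, 1e6)
    tolerance = 1e-6 * max(1.0, sqrt_reference(x))
    if abs(sqrt_under_test(x) - sqrt_reference(x)) > tolerance:
        mismatches += 1
print("mismatches:", mismatches)
```

The same structure works at the GUI level: drive both the old release and the new build with the same scripted inputs and flag any divergence in their outputs.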



Instrument the code under test so that it will generate a log entry any time that the program reaches an unexpected state, makes an unexpected state transition, manages memory, stack space, or other resources in an unexpected way, or does anything else that is an indicator of one of the types of errors under investigation. Use the test tool to randomly drive the program through a huge number of state transitions, logging the commands that it executes as it goes. The next day, the tester and the programmer trace through the log looking for bugs and the circumstances that triggered them. This is a simple example of a simulation. If you are working in collaboration with the application programming team, you can create tests like this that might use your tool more extensively and more effectively (in terms of finding new bugs per week) than you can achieve on your own, scripting new test cases by hand.
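The random-driver idea above can be sketched in miniature. Here the "program under test" is a toy instrumented model with a deliberately seeded bug (it accepts edits while the document is closed); the driver issues random commands, logs each one, and the instrumentation logs every unexpected transition for the next day's trace-through. All names are invented for illustration:

```python
# Minimal sketch: drive an instrumented toy program through many random
# state transitions, logging commands as we go; the instrumentation logs
# an anomaly whenever the program makes an unexpected transition.
import random

log = []        # every command executed, for tracing back to a trigger
anomalies = []  # entries written by the instrumented code under test

class Document:
    """Toy program under test, with a seeded bug: edit works while closed."""

    def __init__(self):
        self.open = False
        self.dirty = False

    def do(self, command):
        log.append(command)
        if command == "open":
            self.open = True
        elif command == "edit":
            if not self.open:  # instrumentation: unexpected transition
                anomalies.append(("edit-while-closed", len(log)))
            self.dirty = True  # the bug: edit succeeds anyway
        elif command == "close":
            self.open = False
            self.dirty = False

random.seed(1)  # reproducible run, so the log can be replayed
doc = Document()
for _ in range(10000):
    doc.do(random.choice(["open", "edit", "close"]))
print(len(log), "commands executed;", len(anomalies), "anomalies logged")
```

Each anomaly records its position in the command log, so the tester and programmer can replay the exact sequence that triggered it.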



7. Automation can be much more successful when we collaborate with the programmers to develop hooks, interfaces, and debug output. (Consensus)



Many of these collaborative approaches don't rely on GUI-based automation tools, or they use these tools simply as convenient test drivers, without regard to what I've been calling the basic GUI regression paradigm. It was fascinating going around the table on the first day of LAWST, hearing automation success stories. In most cases, the most dramatic successes involved collaboration with the programming team, and didn't involve traditional uses (if any use) of the GUI-based regression tools.



We will probably explore collaborative test design and development in a later meeting of LAWST.



8. Most code that is generated by a capture utility is unmaintainable and of no long-term value. However, the capture utility can be useful when writing a test because it shows how the tool interprets a series of recent events. The script created by the capture tool can give you useful ideas for writing your own code. (Consensus)



9. We don't use screen shots "at all" because they are a waste of time. (Actually, we mean that we hate using screen shots and use them only when necessary. We do find value in comparing small sections of the screen. And sometimes we have to compare screen shots, perhaps because we're testing an owner-draw control. But to the extent possible, we should be comparing logical results, not bitmaps.) (Consensus)
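When a bitmap comparison is unavoidable, restricting it to a small region of interest avoids false failures from pixels the test doesn't care about (clock, cursor, theme differences). A toy sketch, with screens represented as 2D lists of pixel values:

```python
# Sketch: compare only a small region of interest rather than the whole
# screen, so irrelevant pixel changes cannot cause false failures.
# Screens here are toy 2D lists standing in for real bitmaps.

def region(screen, top, left, height, width):
    """Extract a rectangular sub-region of a 2D pixel array."""
    return [row[left:left + width] for row in screen[top:top + height]]

baseline = [[0] * 8 for _ in range(6)]      # captured reference screen
actual = [row[:] for row in baseline]       # current screen
actual[0][7] = 9  # a pixel changed OUTSIDE the region under test

roi = (2, 2, 3, 4)  # top, left, height, width of the control of interest
same = region(actual, *roi) == region(baseline, *roi)
print("region matches:", same)
```

A full-screen comparison would have flagged this run as a failure even though the control under test rendered correctly.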



10. Don't lose sight of the testing in test automation. It is too easy to get trapped in writing scripts instead of looking for bugs. (Consensus)


