Ad Hoc Software Testing

 

 

By Chris Agruss & Bob Johnson

Abstract

An ad hoc test can be described as an exploratory case that you expect to run only once, unless it happens to uncover a defect. As such, ad hoc testing is sometimes viewed as a wasteful approach to software testing. On the other hand, many skilled software testers find the exploratory approach to be one of the best techniques for uncovering certain types of defects. In this paper we'll put ad hoc tests into perspective with other forms of exploratory testing. Along the way, we'll also develop a comparison between jazz improvisation in the musical world and improvisational software testing.

Introduction

You might wonder why we're placing such emphasis on the idea that ad hoc tests are fundamentally one-off cases. To begin with, the one-off characteristic distinguishes ad hoc tests cleanly from the formal testing paradigms that emphasize re-executing tests, such as acceptance testing and regression testing. The assertion that one-off tests are valuable flies in the face of conventional wisdom, which places such a premium on repeatability and test automation. Much of the software industry overemphasizes the design and rerunning of regression tests, at the risk of failing to run enough new tests. It is sobering to ask why we spend so much time rerunning tests that have already passed muster, when we could be running one of the many worthwhile tests that have never been tried even once.

 

The Metaphorical Bridge

The spectrum described above can also be pictured as a bridge between formal and less formal testing paradigms. This metaphor has the advantage of suggesting motion across the bridge in both directions. For example, you can bring regression tests across the bridge into the ad hoc domain for more extensive testing. Conversely, you can bring ad hoc tests that you choose to rerun across the bridge in the other direction, for integration with the regression suite. We'll expand this bridge metaphor more fully in the section on Improvisational Testing, below.

 

[Figure: A Bridge Between Paradigms. A two-way bridge connects the two testing domains. One end holds the formal tests (regression, system, acceptance, configuration, performance, release, etc.); the other end holds the less formal tests (exploratory, ad hoc, improvisational, alpha, beta, bug hunts, etc.).]

When is Ad Hoc Testing NOT Appropriate?

We've seen attempts at using ad hoc methods as an adjunct to build acceptance testing, with mixed results. Our opinion is that ad hoc methods are more appropriate elsewhere. If a testing task such as build acceptance needs to be done repeatedly, it should be documented, and then automated. We like to include the build regression tests in the makefile, so that there is no confusion about running these tests. This has also encouraged developers to add more tests.
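As a minimal sketch of what such a makefile-driven check might look like, consider the script below; the `panorama` command and its flags are hypothetical stand-ins for whatever entry point your product actually exposes:

```python
# smoke_test.py -- a minimal sketch of a build-acceptance script wired into
# the makefile's test target. "panorama" and its flags are hypothetical
# stand-ins for the product under test, not a real command.
import subprocess
import sys

SMOKE_CASES = [
    ["panorama", "--version"],             # does the build launch at all?
    ["panorama", "--import", "demo.jpg"],  # does one core feature respond?
]

def main() -> int:
    for cmd in SMOKE_CASES:
        try:
            result = subprocess.run(cmd, capture_output=True, timeout=60)
        except (FileNotFoundError, subprocess.TimeoutExpired) as exc:
            print(f"FAIL: {' '.join(cmd)}: {exc}")
            return 1
        if result.returncode != 0:
            print(f"FAIL: {' '.join(cmd)} exited {result.returncode}")
            return 1  # a nonzero exit fails the make target
        print(f"PASS: {' '.join(cmd)}")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

A makefile rule along the lines of `test: ; python smoke_test.py` then fails the build whenever a smoke case fails, which keeps these tests from being skipped by accident.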

Neither do release tests lend themselves well to ad hoc methods. Release protocols need to be documented and executed meticulously, to ensure that every critical item is checked off the list. These tests need to be complete and highly organized. If release testing fails, the product will need to be fixed and the tests rerun; often this is a driving force behind automating release tests. Moreover, these tests usually need to be run quickly, whereas ad hoc testing is typically a slower, more labor-intensive form of testing. Crash parties and bug hunts may be good team-building or marketing events, but these events rarely result in the sort of ad hoc testing that we're advocating. It might be exciting to have your CEO sit down and try to break the program, but it is also very risky unless the program is thoroughly tested beforehand. We have seen such events go badly, particularly if the CEO breaks the program and everybody goes back to work feeling demoralized.

 

Strengths of Ad Hoc Testing

One of the best uses of ad hoc testing is for discovery. Reading the requirements or specifications (if they exist) rarely gives you a good sense of how a program actually behaves. Even the user documentation may not capture the "look and feel" of a program. Ad hoc testing can find holes in your test strategy, and can expose relationships between subsystems that would otherwise not be apparent. In this way, it serves as a tool for checking the completeness of your testing. Missing cases can be found and added to your testing arsenal.

Finding new tests in this way can also be a sign that you should perform root cause analysis. Ask yourself or your test team, "What other tests of this class should we be running?" Defects found while doing ad hoc testing are often examples of entire classes of forgotten test cases.

Another use for ad hoc testing is to determine the priorities for your other testing activities. In our example program, Panorama may allow the user to sort the photographs being displayed. If ad hoc testing shows this to work well, the formal testing of this feature might be deferred until the more problematic areas are covered. On the other hand, if ad hoc testing of the photograph-sorting feature uncovers problems, then its formal testing might receive a higher priority.

We've found that ad hoc testing can also be used effectively to increase your code coverage. Adding new tests to formal test designs often requires considerable effort: producing the designs, implementing the tests, and finally determining the improved coverage. A more streamlined approach uses iterative ad hoc tests to determine quickly whether you are adding to your coverage. If a case adds the coverage you're seeking, you'll probably want to add it to your archives; if not, you can discard it as a one-off test.
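For instance, assuming a Python codebase and the third-party coverage.py package (pip install coverage), a quick iteration might look like the following sketch; the `exploratory_scenario` function is a hypothetical stand-in for the ad hoc steps you are trying out:

```python
# Sketch: check whether an exploratory scenario touches new code,
# using the coverage.py API (https://coverage.readthedocs.io/).
import coverage

def exploratory_scenario():
    """Hypothetical stand-in for the scripted ad hoc steps."""
    sorted([3, 1, 2])  # placeholder work so the sketch runs as-is

cov = coverage.Coverage(data_file=".coverage.adhoc")
cov.start()
exploratory_scenario()
cov.stop()
cov.save()

# report() prints a per-file table and returns the total percentage;
# comparing this against the formal suite's baseline tells you whether
# the scenario earned a place in the archive or can be discarded.
percent = cov.report()
print(f"Scenario covered {percent:.1f}% of measured code")
```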

There are two general classes of functions in the code of most programs: functions that support a specific feature, and basic low-level housekeeping functions. Ad hoc testing is particularly valuable for these housekeeping functions, because many of them never make it into the specifications or user documentation; the testing team may well be unaware of the code's existence. We'll suggest some methods for exploiting this aspect of ad hoc testing in the Techniques section below.

 

Techniques for Ad Hoc Testing

So, how do you do it? We are reminded of the Zen aphorism: those who say do not know, and those who know do not say. This is a challenging process to describe, because its main virtues are a lack of rigid structure and the freedom to try alternative pathways. Much of what experienced software testers do is highly intuitive, rather than strictly logical. However, here are a few techniques that we hope will help make your ad hoc testing more effective.

To begin with, target areas that are not already covered thoroughly by your test designs. In our experience, test designs are written to cover specific features in the software, with relatively little attention paid to the potential interaction between features. Much of this interaction goes on at the subsystem level (e.g. the graphical subsystem), because a subsystem supports multiple features. Picture some potential interactions like these that could go awry in your program, and then set out to test your theory using an ad hoc approach. In our Panorama example, this might be the interaction between the "stitch" feature and the "import photograph" feature. Can you stitch and then add more pictures? What are all the possibilities for interaction between just these two features?
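As a rough sketch of how you might enumerate the short interaction sequences worth trying (the operation names here are hypothetical stand-ins for Panorama commands):

```python
from itertools import product

# Hypothetical operation names standing in for two Panorama features.
OPERATIONS = ["stitch", "import_photo"]

# Every sequence of two or three operations; each line is a candidate
# ad hoc scenario ("import, stitch, import again", and so on).
for length in (2, 3):
    for sequence in product(OPERATIONS, repeat=length):
        print(" -> ".join(sequence))
```

Even for two features the enumeration grows quickly, which is exactly why a tester's judgment about which sequences look risky matters more than exhaustive execution.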

Before embarking on your ad hoc adventure, take out paper and pencil. On the paper, write down what you're most interested in learning about the program during this session. Be specific. Note exactly what you plan to achieve, and how long you are going to spend doing the work. Then make a short list of what might go wrong and how you would be able to detect such problems.

Next, start the program, and try to exercise the features that you are currently concerned about. Remember that you want to use this time to better understand the program, so side trips to explore both the breadth and depth of the program are very much in order.

As an example, suppose you want to determine how thoroughly you need to test Panorama's garbage collection (random access memory cleanup and reuse). From the documentation given, you will probably be uncertain whether any garbage collection testing is required. Although many programs use some form of garbage collection, it is seldom mentioned in any specification. Systematic testing of garbage collection is very time consuming. However, with one day's ad hoc testing of most programs, an experienced tester can determine whether the program's garbage collection is "okay," has "some problems," or is "seriously broken." With this information the test manager can decide how much effort to spend testing it further. We've even seen a development manager withdraw features for review, and call for a rewrite of the code, as a result of this knowledge, saving both testing and debugging time. There are hundreds of these low-level housekeeping functions, often re-implemented for each new program, and prone to design and programmer error; a sketch of one such quick probe appears after the list below. Some other areas to consider include:

  sorting
  searching
  two-phase commit
  hashing
  saving to disk (including temporary and cached files)
  menu navigation
  memory management (beyond garbage collection)
  parsing
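To make the garbage-collection example concrete, here is a minimal sketch of the kind of quick leak probe an ad hoc tester might write, assuming the product exposes one scriptable operation; the `open_and_close_document` function below is a hypothetical stand-in:

```python
import gc
import tracemalloc

def open_and_close_document():
    """Hypothetical stand-in for one repeatable product operation."""
    buffer = [bytes(1024) for _ in range(100)]
    del buffer

tracemalloc.start()
gc.collect()
baseline, _ = tracemalloc.get_traced_memory()

# Hammer the same operation many times and watch the net allocation.
for _ in range(1000):
    open_and_close_document()

gc.collect()
current, peak = tracemalloc.get_traced_memory()
print(f"Net growth after 1000 iterations: {current - baseline} bytes "
      f"(peak {peak} bytes)")
# Roughly flat growth suggests cleanup is "okay"; steady growth per
# iteration is the "some problems" or "seriously broken" signal.
```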

A good ad hoc tester needs to understand the design goals and requirements for these low-level functions. What choices did the development team make, and what were the weaknesses of those choices? As testers, we are less concerned with the choices that turned out to be correct. Ad hoc testing can be done as black box testing, but this means you must check for all the major design patterns that might have been used. Working with the development team to understand their choices moves you closer to clear (white box) testing, which allows you to narrow the testing to far fewer cases.

Read defect reports from many projects, not just from your own. Your defect database doesn't necessarily tell you what kinds of mistakes the developers are making; it tells you what kinds of mistakes you are finding. You want to find new types of problems, so expand your horizon. Read about problems, defects, and weaknesses in the application's environment: the operating system, the language being used, the specific compiler, the libraries used, and the APIs being called.

Learn to use debuggers, profilers, and task monitors while running one-off tests. In many cases you won't see a visible error in the execution of the program, yet one of these tools will flag a process that is out of control.
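As one small illustration, using Python's standard cProfile module (the `scenario` function is a hypothetical stand-in for a one-off test), a profiler can reveal runaway work even when the program's visible output looks correct:

```python
import cProfile
import pstats

def scenario():
    """Hypothetical one-off test driver."""
    total = 0
    for i in range(200_000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
scenario()
profiler.disable()

# Functions with surprising call counts or cumulative time are the
# "process out of control" signal, even if the test appeared to pass.
stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(10)
```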

 

Where does ad hoc testing fit in the testing cycle?

 

Ad hoc testing finds a place during the entire testing cycle. Early in the project, it provides breadth to the testers' understanding of the program, thus aiding in discovery. In the middle of a project, the data obtained helps set priorities and schedules. As a project nears the ship date, ad hoc testing can be used to examine defect fixes more rigorously, as described earlier.

 

Can Ad Hoc Testing Be Automated?

Regression testing is often better suited than ad hoc testing to automated methods. However, automated testing methods are available that provide random variation in how tests are executed; for example, see Noel Nyman's excellent paper listed in the reference section, GUI Application Testing With Dumb Monkeys. Yet a key strength of ad hoc testing is the ability of the tester to perform unexpected operations, and then to make a value judgement about the correctness of the results. This last aspect of ad hoc testing, verification of the results, is exceedingly difficult to automate.

Partial automation is another way in which automated testing tools and methods can augment ad hoc testing in particular, and exploratory testing in general. Partial automation means that some of the test is automated, and some of it relies upon a human. For example, your ad hoc testing may require a great deal of setup and configuration, which could be covered by an automated routine. Another common scenario involves designing an automated routine to run a variety of tests, but using a human being to verify whether the tests passed or failed. For logistical reasons, this last method is more feasible for low-volume tests than it is for high-volume approaches.

There are yet other ways in which automation tools can help the manual ad hoc tester. Ordinarily, we advise against using record/playback tools for automating test cases, but a good recording tool may help in capturing the steps in a complex series of ad hoc operations. This is especially important if you uncover a defect, because it will bring you that much closer to identifying the steps to reproduce it. Some testers have told us that they routinely turn on their recorders when they begin ad hoc testing, specifically for this purpose. Alternatively, you could opt to place tracing code in the product. When isolating a defect, tracing code may provide more useful information than a captured script will. Tracing can also help you understand how the lower-level code, your target for subsystem testing, is being used.
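As a rough sketch of the recording idea, in the spirit of Nyman's dumb monkey: the code below logs every randomly chosen operation, and records the random seed so the whole session can be replayed; the operation names and the `perform` hook are hypothetical stand-ins:

```python
import logging
import random

logging.basicConfig(filename="adhoc_session.log",
                    format="%(asctime)s %(message)s",
                    level=logging.INFO)

OPERATIONS = ["import_photo", "stitch", "undo", "save", "sort_photos"]

def perform(operation):
    """Hypothetical hook that would drive the product's UI or API."""
    logging.info("performing %s", operation)

seed = random.randrange(1_000_000)
random.seed(seed)  # record the seed so the session can be replayed exactly
logging.info("session seed: %d", seed)

for _ in range(500):
    perform(random.choice(OPERATIONS))

# If a defect turns up, the log (or the seed alone) recovers the exact
# sequence of steps -- the hard part of reproducing an ad hoc find.
```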

Future Directions

Some of the most interesting new work in this area is coming from Cem Kaner, a professor at the Florida Institute of Technology. From looking over his course notes, it's clear that Cem has collected a rich set of techniques that can help guide the exploratory tester.

Brian Marick has described a somewhat different form of improvisational testing in his article Interactions and Improvisational Testing (http://www.testingcraft.com/exploiting-interactions.html). In that article, he describes improvisational testing as "testing that's planned to actively encourage imaginative leaps." This appears to be another very fruitful dimension to improvisational testing that warrants further exploration.

 

 

References

Bach, James, General Functionality and Stability Test Procedure, http://www.testingcraft.com/bach-exploratory-procedure.pdf.

Kaner, Cem, Jack Falk, and Hung Nguyen, Testing Computer Software, 2nd Edition, John Wiley & Sons, 1999.

Knuth, Donald, The Art of Computer Programming, Volume 1: Fundamental Algorithms, Addison-Wesley, 1973.

Knuth, Donald, The Art of Computer Programming, Volume 3: Sorting and Searching, Addison-Wesley, 1973.

Marick, Brian, The Craft of Software Testing: Subsystem Testing Including Object-Based and Object-Oriented Testing, Prentice Hall, 1994.

Nyman, Noel, GUI Application Testing With Dumb Monkeys, Proceedings of STAR West, 1998.

Webster, Bruce, Pitfalls of Object-Oriented Development, M&T Books, 1995.
