Agile-Friendly Test Automation Tools/Frameworks

Several people have asked me recently why I’m not a fan of the traditional test automation tools for Agile projects. “Why should I use something like Fit or Fitnesse?” they ask. “We already have <insert Big Vendor Tool name here>. I don’t want to have to learn some other tool.”

Usually the people asking the question, at least in this particular way, are test automation specialists. They have spent much of their career becoming experts in a particular commercial tool. They know how to make their commercial tool of choice jump through hoops, sing, and make toast on command.

Then they find themselves in a newly Agile context struggling to use the same old tool to support a whole new way of working. They’re puzzled when people like me tell them that there are better alternatives for Agile teams.

So if you are trying to make a traditional, heavyweight, record-and-playback test automation solution work in an Agile context, or if you are trying to help someone else understand why such efforts are almost certainly doomed to fail, this post is for you.

Why Traditional, Record-and-Playback, Heavyweight, Commercial Test Automation Solutions Are Not Agile

Three key reasons:

  1. The test-last workflow encouraged by such tools is all wrong for Agile teams.
  2. The unmaintainable scripts created with such tools become an impediment to change.
  3. Such specialized tools create a need for Test Automation Specialists and thus foster silos.

Let’s look at each of these concerns in turn, then look at how Agile-friendly tools address them.

Test-Last Automation

Traditional, heavyweight, record-and-playback tools force teams to wait until after the software is done – or at least the interface is done – before automation can begin. After all, it’s hard to record scripts against an interface that doesn’t exist yet. So the usual workflow for automating tests with a traditional test automation tool looks something like this:

  1. Test analysts design and document the tests
  2. Test executors execute the tests and report the bugs
  3. Developers fix the bugs
  4. Test executors re-execute the tests and verify the fixes (repeating as needed)
  5. …time passes…
  6. Test automation specialists automate the regression tests using the test documents as specifications

Looking at the workflow this way, it’s surprising to me that this particular test automation strategy ever works, even in traditional environments with long release cycles and strict change management practices. By the time we get around to automating the tests, the software is done and ready to ship. So those tests are not going to uncover much information that we don’t already know.

Sure, automated regression tests are theoretically handy for the next release. But usually the changes made for the next release break those automated tests (see concern #2, maintainability, coming up next). The result for most contexts: high cost, limited benefit. In short, such a workflow is a recipe for failure on any project, not just for Agile teams. The teams that have made this workflow work well in their context have had to work very, very hard at it.

However, this workflow is particularly bad in an Agile context where it results in an intolerably high level of waste and too much feedback latency.

  • Waste: the same information is duplicated in both the manual and automated regression tests. Actually, it’s duplicated elsewhere too. But for now, let’s just focus on the duplication in the manual and automated tests.
  • Feedback Latency: the bulk of the testing in this workflow is manual, and that means it takes days or weeks to discover the effect of a given change. If we’re working in 4-week sprints, waiting 3–4 weeks for regression test results just does not work.

Agile teams need the fast feedback that automated system/acceptance tests can provide. Further, test-last tools cannot support Acceptance Test Driven Development (ATDD). Agile teams need tools that support starting the test automation effort immediately, using a test-first approach.

Unmaintainable Piles of Spaghetti Scripts

Automated scripts created with record-and-playback tools usually contain a messy combination of at least three different kinds of information:

  • Expectations about the behavior of the software under test given a set of conditions.
  • Implementation-specific details about the interface.
  • Code to drive the application to the desired state for testing.

So a typical script will have statements to click buttons identified by hard-coded button ids, followed by statements that verify the resulting window title, followed by statements to verify the calculated value in a field identified by another hard-coded id, like so:

field("item_1").enter_value("12345")
button("lookup_item_1").click
field("price_1").verify_value("$7.00")
field("qty_1").enter_value("6")
button("total_next").click
active_window.verify_title("Checkout")
field("purchase_total").verify_value("$42.00")

The essence of the test was to verify that ordering 6 items at $7 each results in a shopping cart total of $42. But because the script has a mixture of expectations and UI-specific details, we end up with a whole bunch of extraneous implementation details obfuscating the real test.

(If you’re nodding along, thinking to yourself, “Yup, looks like our test scripts,” then you have my sympathies. My deep, deep sympathies. Good, maintainable, automated test scripts do not look like that.)

All that extraneous stuff doesn’t just obscure the essence of the test. It also makes such scripts hard to maintain. Every time a button id changes, or the workflow changes, say with a “Shipping Options” screen inserted before the Checkout screen, the script has to be updated. But that value $42.00? That only changes if the underlying business rules change, say during the “Buy 5, get a 6th free!” sale week.

Of course, there are teams that have poured resources, time, and effort into creating maintainable tests using traditional test automation tools. They use data-driven test strategies to pull the test data into files or databases. They create reusable libraries of functions for common action sequences like logging in. They create an abstract layer (a GUI map) between the GUI elements and the tests. They use good programming practices, have coding standards in place, and know about refactoring techniques to keep code DRY. I know about these approaches. I’ve done them all.
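
For illustration, that kind of abstraction layer might look roughly like the sketch below when written in a general-purpose language such as Ruby. This is a hand-rolled sketch, not any vendor tool’s API: the driver object, GUI_MAP, and ShoppingActions are hypothetical names standing in for whatever the tool actually exposes.

# GUI map: one place that knows the implementation-specific element ids.
GUI_MAP = {
  item_sku_field: "item_1",
  lookup_button:  "lookup_item_1",
  price_field:    "price_1",
  quantity_field: "qty_1",
  total_button:   "total_next",
  total_field:    "purchase_total"
}.freeze

# Reusable library of common action sequences, so individual scripts stay DRY.
module ShoppingActions
  def order_item(driver, sku, quantity)
    driver.field(GUI_MAP[:item_sku_field]).enter_value(sku)
    driver.button(GUI_MAP[:lookup_button]).click
    driver.field(GUI_MAP[:quantity_field]).enter_value(quantity.to_s)
    driver.button(GUI_MAP[:total_button]).click
  end
end

With the ids and common sequences pulled into one place, a renamed button touches one line in the map instead of every script that clicks it.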

But I had to fight the tools the whole way. The traditional heavyweight test automation tools are optimized for record-and-playback, not for writing maintainable test code. One of the early commercial tools I used even made it impossible to create a separate reusable library of functions: you had to put any general-use functions into a library file that shipped with the tool (making tool upgrades a nightmare). That’s just EVIL.

Agile teams need tools that separate the essence of the test from the implementation details. Such a separation is a hallmark of good design and increases maintainability. Agile teams also need tools that support and encourage good programming practices for the code portion of the test automation. And that means they need to write the test automation code using real, general-purpose languages, with real IDEs, not vendor script languages in hamstrung IDEs.

Silos of Test Automation Specialists

Traditional QA departments working in a traditional waterfall/phased context, and automating tests, usually have a dedicated team of test automation specialists. This traditional structure addresses several forces:

  1. Many “black-box” testers don’t code, don’t want to code, and don’t have the necessary technical skills to do effective test automation. Yes, they can click the “Record” button in the tool. But most teams I talk to these days have figured out that having non-technical testers record their actions is not a viable test automation strategy.
  2. The license fees for traditional record-and-playback test automation tools are insanely expensive. Most organizations simply do not have the budget to buy licenses for everyone. Thus only the anointed few are allowed to use the tools.
  3. Many developers view the specialized QA tools with disdain. They want to write code in real programming languages, not in some wacky vendor script language using a hamstrung IDE.

Thus, the role of the Test Automation Specialist was born. These specialists usually work in relative isolation. They don’t do day-to-day testing, and they don’t have their hands in the production code. They have limited interactions with the testers and developers. Their job is to turn manual tests into automated tests.

That isolation means that if the production code isn’t testable, these specialists have to find a workaround because testability enhancements are usually low on the priority list for the developers. I’ve been one of these specialists, and I’ve fought untestable code to get automated tests in place. It’s frustrating, but oddly addictive. When I managed to automate tests against an untestable interface, I felt like I’d slain Grendel, Grendel’s mother, all the Grendel cousins, and the horse they rode in on. I felt like a superhero.

But Agile teams increase their effectiveness and efficiency by breaking down silos, not by creating test automation superheroes. That means the test automation effort becomes a collaboration. Business stakeholders, analysts, and black box testers contribute tests expressed in an automatable form (e.g. a Fit table) while the programmers write the code to hook the tests up to the implementation.

Since the programmers write the code to hook the tests to the implementation while implementing the user stories, they naturally end up writing more testable code. They’re not going to spend 3 days trying to find a workaround to address a field that doesn’t have a unique ID when they could spend 5 minutes adding the unique ID. Collaborating means that automating tests becomes a routine part of implementing code instead of an exercise in slaying Grendels. Less fun for test automation superheroes, but much more sensible for teams that actually want to get stuff done.

So that means Agile teams need tools that foster collaboration rather than tools that encourage a whole separate silo of specialists.

Characteristics of Effective Agile Test Automation Tools

Reviewing the problems with traditional test automation tools, we find that Agile teams need test automation tools/frameworks that:

  • Support starting the test automation effort immediately, using a test-first approach.
  • Separate the essence of the test from the implementation details.
  • Support and encourage good programming practices for the code portion of the test automation.
  • Support writing test automation code using real languages, with real IDEs.
  • Foster collaboration.

Fit, Fitnesse, and related tools (see the list at the end of the post for more) do just that.

Testers or business stakeholders express expectations about the business-facing, externally visible behavior in a table using keywords or a Domain Specific Language (DSL). Programmers encapsulate all the implementation details, the button-pushing or API-calling bits, in a library or fixture.

So our Shopping Cart example from above might be expressed like this:

Choose item by sku 12345
Item price should be $7.00
Set quantity to 6
Shopping cart total should be $42.00

See, no button IDs. No field IDs. Nothing except the essence of the test.

Writing our test in that kind of stripped-down-to-the-essence way makes it no longer just a test. As Brian Marick would point out, it’s an example of how the software should behave in a particular situation. It’s something we can articulate, discuss, and explore while we’re still figuring out the requirements. The team as a whole can collaborate on creating many such examples as part of the effort to gain a shared understanding of the real requirements for a given user story.

Expressing tests this way makes them automatable, not automated. Automating the test happens later, when the user story is implemented. That’s when the programmers write the code to hook the test up to the implementation, and that’s when the test becomes an executable specification.
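
A minimal sketch of what that glue code might look like, again in Ruby and again with hypothetical names (ShoppingCart stands in for the real code under test; no particular framework’s fixture API is implied):

# Fixture sketch: each method corresponds to a step in the example above.
class ShoppingCartFixture
  def initialize(cart = ShoppingCart.new)  # ShoppingCart: the (hypothetical) code under test
    @cart = cart
  end

  # "Choose item by sku 12345"
  def choose_item_by_sku(sku)
    @item = @cart.add_item(sku)
  end

  # "Item price should be $7.00"
  def item_price
    format("$%.2f", @item.price)
  end

  # "Set quantity to 6"
  def set_quantity(qty)
    @item.quantity = qty
  end

  # "Shopping cart total should be $42.00"
  def shopping_cart_total
    format("$%.2f", @cart.total)
  end
end

Nothing in that fixture is specific to a GUI; the same steps could just as easily be wired to a Selenium or Watir driver, or to a web service call.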

Before it is automated, that same artifact can serve as a manual test script. However, unlike the traditional test automation workflow where manual tests are translated into automated tests, here there is no wasteful translation of one artifact into another. Instead, the one artifact is leveraged for multiple purposes.

For that matter, because we’re omitting implementation-specific details from the test, the test could be re-used even if the system were ported to a completely different technology. There is nothing specific to a Windows or Web-based interface in the test. The test would be equally valid for a green screen, a Web services interface, a command line interface, or even a punch-card interface. Leverage. It’s all about the leverage.

Traditional Tools Solve Traditional Problems in Traditional Contexts. Agile Is Not Traditional.

Traditional, heavyweight, record-and-playback tools address the challenges faced by teams operating in a traditional context with specialists and silos. They address the challenge of having non-programmers automate tests by providing record-and-playback features, a simplified editing environment, and a simplified programming language.

But Agile teams don’t need tools optimized for non-programmers. Agile teams need tools to solve an entirely different set of challenges related to collaborating, communicating, reducing waste, and increasing the speed of feedback. And that’s the bottom line: Traditional test automation tools don’t work for an Agile context because they solve traditional problems, and those are different from the challenges facing Agile teams.

Related Links

A bunch of us are discussing the next generation of functional testing tools for Agile teams on the AA-FTT Yahoo! group. It’s a moderated list and membership is required. However, I’m one of the moderators, so I can say with some authority that we’re an open community. We welcome anyone with a personal interest in the next generation of functional tools for Agile teams. We’re also building lists of resources. In the Links section of the AA-FTT Yahoo! group, you’ll find a list of Agile-related test automation tools and frameworks. And the discussion archives are interesting.

Brian Marick wrote a lovely essay on An Alternative to Business-Facing TDD.

I also discussed some of the ideas in this article in previous blog posts.

A small sampling of Agile-friendly tools and frameworks:

  • Ward Cunningham’s original Fit has inspired a whole bunch of related tools/frameworks/libraries including Fitnesse, ZiBreve, Green Pepper, and StoryTestIQ.
  • Concordion takes a slightly different approach to creating executable specifications where the test hooks are embedded in attributes in HTML, so the specification is in natural language rather than a table.
  • Selenium RC and Watir tests are expressed in Ruby, and Ruby lends itself to good DSLs.

Are you the author or vendor of a tool that you think should be listed here? Drop a note in the comments with a link. Please note however that comment moderation is turned on, and I will only approve the comment if I am convinced that the tool addresses the concerns of Agile teams doing functional/system/acceptance test automation.
