How to Succeed at Automated Testing

The following is based on interviews with Richard Bornet, the originator of the Scenario Tester process and co-author of the "Scenario Tester" program. He discusses the origins of both, how they were developed, and how they can help solve the current problems with automated testing.

 

Q. How did you get involved in the Scenario Tester approach?

I first became versed in automated testing after years of experience developing and testing software. It became quite apparent to me that the customary way we were testing our applications was simply not adequate to the task. Although the right tools were sometimes there, they were being used in the wrong way, which is almost worse than not having any tool at all! That set me on the road to coming up with a "better way" to perform automated testing. During the past five years, my associates and I have been implementing and refining the "Scenario Tester" approach.

 

Q. Has the "Scenario Tester" approach been proven?

To date, this approach has been successfully adopted by over 100 users in many companies, including several major corporations. These users have found relief from many of the doldrums of their jobs, and the corporations have gained happier employees and faster release cycles with more effective testing performed - in short, greater productivity.

 

Q. For those of our readers without this background, what exactly is automated testing?

To answer this, let's take a step back and look at "manual testing" first.

Most of us have experience with this. You have an application in front of you, and a tester who sits at this application, enters data and checks to see if all the functionality is working, all the business rules are met and all the expected results are obtained.

Well, that's the principle behind automated testing, except that instead of a human tester doing the entry, a testing tool does it. These marvelous tools allow you to build scripts, which you can play back every time you get a new build or a change in an application. While they play back, you can picture yourself putting your feet up on the desk and watching the tool test your application.

If only it were that simple.

In practice, automated testing usually consists of a sophisticated engine that drives scripts to enter data into the application under test, and checks results.

 

Q. So automated testing is the solution...

There are lots of benefits to automated testing, but there are also landmines to be aware of. These landmines have destroyed more than a project or two. That's why we needed to take it to the next level.

 

Q. What's so great about automated testing? Can't we just stick with manual testing?

Scripts are like having extra testers. They can re-test functionality over and over again without getting tired or bored. They don't go to meetings, take coffee breaks or answer e-mail. They don't chitchat and don't mind staying late. Scripts are consistent and don't skip a beat or cut corners. Finally scripts do not burn out on the job.

So if you have only a few hours to test, you can run scripts on several machines and re-test significant amounts of functionality.

If a tester quits, which happens too often, that person's know-how is left in the scripts. The next tester will have consistent, reliable tests to execute. You have moved that knowledge from one tester to the next.

 

Q. Ok, I admit that's helpful, but is automated testing really essential?

I have seen projects that would have failed without automated testing. For example, in one project, 70-page proposals were created which were legal documents. It was important that every word be exactly correct. Manually this was extremely difficult. It took so long to manually proofread these documents that very few proposals were created and tested. Costly errors were getting through.

This task was automated. Over 70 proposals were constructed and tested fully for each new build in about three hours. There were seven new builds, and all 70 proposals were run each time! Compare that to three proposals run two or three times manually.

On another project, the system created leases. Over 20 different leases had to be tested for each build. The programmers were making multiple adjustments all day. It took over 25 minutes to create a lease manually. Our scripts created them in 7.5 minutes each. They were run on multiple machines and all 20 could be created and validated in under half an hour. Compare that with 8.3 hours if done manually! Programmers would make an adjustment and say "please run leases 3, 6 and 14" or even "run all of them!". Because of the way the scripts were set up, it was just point and click to select what needed to be run. One tester handled all of this with much time to spare. It didn't even interfere with her need to take smoking breaks.

For another client, we wrote a script to test whether all the screens were accessible in a mainframe application. It took 2.5 hours to test manually. It took 7.5 minutes by script!

Having successfully implemented automated testing in many settings, I cannot imagine not using it on a project.

 

Q. So there just isn't enough time to test thoroughly without automated scripts?

Right. If you have large amounts to test and not enough resources, automated testing is the best, if not the only, solution available to you.

 

Q. But is there a downside?

As I implied earlier, things don't always work out so wonderfully. Studies have shown that most automated testing efforts are shelved within 12 months.

The companies that continue to use them are often left with hobbled scripts. They are too difficult to modify, or expand to meet changing needs. Quite often all that is left after a few years is a shell that performs a few basic actions, and it isn't used very often.

 

Q. Why is this?

There are three major reasons why automated testing fails:

  • Scripts are written in such a way that they are not maintainable. 
  • It is too difficult to handle the test data. 
  • The right people doing the wrong job, and the wrong people doing the right job!

I want to talk about all three in some detail. Later I want to discuss solutions to the problems.

 

Q. Ok, let's discuss those one by one. You mentioned scripts...

Oh no, techie-speak! No, no. [Laughs]. This will be very straightforward, something everyone will be able to understand. I include it to show how easy it is to derail the automated testing effort. Then, once you understand what problems can occur, you will better understand what the solutions are. The code will be from Rational Robot, an automated testing tool, but the same principles apply to the other major automated testing tools.

The traditional way to create automated testing scripts is to turn a record function on, then go to the application and enter data, move from screen to screen, and test for expected results. The theory is that you then play back what you have recorded.

 

[Turns to the workstation, starts the Microsoft Calculator program and brings up Rational Robot.]

So let's turn on the tool and record some scripts. Here is what you get.

 

Window SetContext, "Caption=Calculator", ""
Window SetContext, "Caption=Calculator", ""
PushButton Click, "ObjectIndex=7"
PushButton Click, "ObjectIndex=11"
PushButton Click, "ObjectIndex=15"
PushButton Click, "ObjectIndex=6"
PushButton Click, "ObjectIndex=10"
PushButton Click, "ObjectIndex=14"
PushButton Click, "ObjectIndex=5"
PushButton Click, "ObjectIndex=9"
PushButton Click, "ObjectIndex=13"
PushButton Click, "ObjectIndex=8"
PushButton Click, "ObjectIndex=20"
InputKeys "9876543210"
PushButton Click, "ObjectIndex=21"

Result = LabelTC (CompareText, "ObjectIndex=1", "CaseID=TEST2A;Type=CaseInsensitive")

Ok, folks, what does this script do? Yes, it has something to do with the calculator, but what specifically?

 

Q. I see what you mean. It looks like a bunch of gobbledygook.

Right! Here's some news: the person who recorded it also doesn't know what it does. [Laughter]. He comes back to it in three weeks' time and goes, "Huh?" I have seen scripts that look like this which are hundreds of lines long. Pray that you never have to adjust them. But guess what, you will have to adjust them! You always have to adjust scripts, especially those which are incomprehensible.

 

Q. But that's a function of the test tool. How do you avoid it?

I have developed a neat solution. It's called commenting. Now, I know what you are thinking. Just stay with me. We all know that we are supposed to comment our code, but in the usual approach the code is what drives the process and the comments are put in after the fact. I think this is backwards. I think you should put in your comments first and then add the code, so the scripts are organized by comments, with code interspersed.

Let's do that to the previous code.


'Enter 1234567890 by clicking on numbers
'Click on 1
    Window SetContext, "Caption=Calculator", ""
    PushButton Click, "ObjectIndex=7"

'Click on 2
    PushButton Click, "ObjectIndex=11"
'Click on 3
    PushButton Click, "ObjectIndex=15"
'Click on 4
    PushButton Click, "ObjectIndex=6"
'Click on 5
    PushButton Click, "ObjectIndex=10"
'Click on 6
    PushButton Click, "ObjectIndex=14"
'Click on 7
    PushButton Click, "ObjectIndex=5"
'Click on 8
    PushButton Click, "ObjectIndex=9"
'Click on 9
    PushButton Click, "ObjectIndex=13"
'Click on 0
    PushButton Click, "ObjectIndex=8"

'Click on +
    PushButton Click, "ObjectIndex=20"

'Enter 9876543210 by typing
    InputKeys "9876543210"

'Click on =
    PushButton Click, "ObjectIndex=21"

'Test result of addition
    Result = LabelTC (CompareText, "ObjectIndex=1", "CaseID=TEST2A;Type=CaseInsensitive")

You must admit that there is now no doubt what this script does. Notice how all the comments are left-justified: your eyes read the comments first, then you look at the code, which is always indented. In reality you don't look at all the code, only the code you are interested in. You may say it is very time consuming to put all those comments in. No, it's not. I do this all day. Yes, I have a life. [Laughs] It is quite quick, and the payoff is enormous.

If all you take away from this talk is how to use this commenting method, I have given your scripts a significant boost.

 

Q. But there has to be more to good scripting than comments...

You're right. Now that we have got comments out of the way, we have other problems. I was taught when I took the SQA Team Test course, for example, that you should record one script to test an individual test requirement.

Let's take an example where you want to enter different information into clients' files. The first step is to find the client. You may have 50 different test requirements, which means you have coded the find screen 50 times. Then the developer changes the names of the objects on the find screens. Oops. You now have to change 50 scripts to keep them running. Next thing you know, you have headlines "Crazed Tester Kills Computer Programmer".

In reality, what happens is that the tester fixes one or two scripts, and the rest die. She has every intention of getting back to them shortly, like in 2006. [laughs] I have seen this in company after company. If all the scripts haven't died, then they have one or two, which they use over and over again.

 

Q. I would imagine that you have a solution for this?

You're right again. The solution is to divide the application into small parts, whether by screen or by module, and then record generic routines for each part. These routines use variables instead of hard-coded values. For example, every time you needed to enter some data into a particular screen, you would call a particular script and pass it that bit of data. Below is a sample script used against a mainframe.


'Input Contract_Enter_Customer_Code
    InputKeys Contract_Enter_Customer_Code
    InputKeys "{Enter}"
    DelayFor Delay_between_Fields

'Input Contract_add_new_contract_number
    InputKeys Contract_add_new_contract_number
    InputKeys "{Enter}"
    DelayFor Delay_between_Fields

'Input Contract_enter_customer_number
    InputKeys Contract_enter_customer_number
    InputKeys "{Enter}"
    DelayFor Delay_between_Fields
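
To make this concrete, here is a minimal sketch of how a driving script might use the routine above. It assumes the three Contract_... variables are shared between scripts (for example, as globals declared in a common header file) and that the generic routine has been saved as a script named "ContractScreen"; the names and values are purely illustrative.

'Set the data for the Contract screen
    Contract_Enter_Customer_Code = "4321"
    Contract_add_new_contract_number = "45627"
    Contract_enter_customer_number = "123456"

'Call the generic Contract screen routine with this data
    CallScript "ContractScreen"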

 

Q. Ok, I've heard of that approach before. It's called data-driven testing. Is that the same as the Scenario Tester approach?

It is a fundamental part of the Scenario Tester approach, but not the whole thing.

Creating modularized, data-driven tests is about the state of the art today. It's pretty much accepted wisdom that this is the format you should use when creating automated tests. When I taught automated testing, I highly encouraged it: "Separate the data from the scripts."

Unfortunately, in the real world it is not quite that simple.

 

Q. What's wrong with data-driven testing?

The question is where you keep your data. The usual answer is in a CSV (comma-separated values) file. Such a file looks like this:


Contract 1, 4321,123456,45627,44,15,Lease,19,3435353
Contract 2, 4322,123457,45627,44,15,Loan,33,3435355

This file has nine variables and is for demo purposes only. Just picture a file with hundreds, let alone thousands, of variables. You can't, or at least it's quite hypnotic: data and commas, data and commas, and then more data and commas. Billions and billions and billions. Well, that's a bit of an exaggeration, but you get the point. Changing something is very, very time consuming, adding some new variables in the middle of your data is impossible, and if you miss a comma somewhere, hope they assign you to the "finding a needle in a haystack" brigade for the next week. [Laughter] That will be easier and have a better chance of success.
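
To make that concrete, here is a hypothetical example (the "Region" value is invented for illustration). Suppose a new Region value has to be added after the fifth field. Every value after it shifts one position, so any script that reads the row positionally now reads the wrong data in every single row:

Contract 1, 4321,123456,45627,44,15,Lease,19,3435353
Contract 1, 4321,123456,45627,44,East,15,Lease,19,3435353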

 

Q. I can see your point. But how can we avoid this, and still separate the data from the test scripts?

My solution to this was to enter the names of the fields in between the data. It looks something like this:


Test Requirement name, Contract 1, Customer No, 4321, Account number, 123456, Application No, 45627,Age,44

This makes the file quite understandable, but there were still two major problems. One was that if you missed a comma, it was still very laborious to figure out where. The second was that adding fields in the middle was very problematic. Let's say you needed to add 400 characters. You would do this by search and replace, but the maximum length of a search string in most editors is about 128 characters, which means you would have to do the search and replace four times. It's very hard to get it right. You always blow something. Then it is debug time. I had one project die completely because the team could not handle the CSV file.

 

Q. What about Excel?

That's fine if you have only 256 fields, which was Excel's column limit at the time, and I haven't had an application like that... ever. In addition, you have to scroll left and right all the time, which gets very frustrating. Don't believe me? Fill about 400 rows with data all the way from A1 to IJ400. Then, by scrolling only, go to AB234, then GT100, then IA27, then D387. Now do this for a couple of hours. Constantly moving around a spreadsheet gets very tiring and frustrating.

Some projects use multiple Excel spreadsheets but this dramatically increases the complexity. Without programmers, the chance of success is limited.

Some of the test tools have built-in systems that allow you to store data in spreadsheet-like formats. Again, they are complex, and these efforts often die if you don't have experienced programmers.

 

Q. And the test data problem is often overlooked...

And that's the crux of the problem. You want to be able to control your test data easily. Not only that, you want the testers themselves to control the test data. You want to make this as simple as possible for them, without them having to become professional coders. Ideally, you want to make it simple enough that any individual who knows the business can create the test data without much difficulty.

If the test data is too complex to handle, a major automated test effort can be derailed.

 

Q. Ok, you've covered the first two reasons why automated testing projects fail. What about the people issue? Is it a training issue?

The prevalent belief is that testers can be up and running with the automated tool fairly quickly after taking a three-to-five-day course. You know, just record and play it back. I only wish it were true. The fact is that they are recording code - programming code. To convince everyone that it is simple, we don't call it coding or programming. We call it scripting. Changing the name immediately makes it simpler!

Using the same logic, it isn't differential calculus anymore, it's letters with superscripts. It's not molecular biology anymore, it's very small health studies. [Laughter]. It is difficult for most people to become proficient programmers after a three-day class with no previous programming experience.

 

Q. Ok, "scripting" is a euphemism. But back to testers...

Good testers usually know the business. They are often drawn from the business side, not from the ranks of the developers. They are not professional coders. They often don't want to become professional coders. If they wanted to be programmers, they would have done that by now.

So now you want to take them and turn them into coders? If you even partially succeed, then what you have often done is lost an excellent tester and gained a merely passable coder! What I have noticed happens much more often is that the initial enthusiasm wanes, the coding effort lags, and manual testing wins out. There are always really good excuses, usually boiling down to time constraints.

Sometimes it is legitimate: the tester must choose between scripting and manual testing, and manual testing wins out. Other times they focus on other activities and resist writing code. The effect is the same: the automated scripting effort dies.

 

Q. So what you're saying is that traditional automated testing boils down to a coding project? Why not use programmers then?

Problem number one is that most programmers don't want to be automated testers. They don't want to "script", they want to program real code like file transfer protocols or printer drivers. The excitement just gushes then! [Laughter]. But all humour aside, programmers often don't want to become automated scripters. Although it is programming, most don't view it this way.

But if you get a programmer, even a junior programmer, you still have to teach that person the business. For some applications this is doable. But I have seen some applications, especially on mainframes, with such complex business rules that it can take years to learn the business.

 

Q. So it's now a catch-22. You can't teach real testers to be superior coders, and you can't teach coders to be superior testers...

You now know the dilemma. Take a tester and teach them to program, or take a developer and teach them the business. Neither is a completely satisfactory solution. But there is a solution.

 

Q. And that solution is...

Let's just backtrack for a minute. That's just one of the problems we have raised so far with traditional automated testing that lead to failure. Let's review some of them...

  • The scripts are incomprehensible and too complicated.
  • The scripts are not robust, not re-usable and don't follow a good set of standards.
  • There are too many scripts to change when there are changes in the application under test.
  • You can't control large amounts of test data adequately.
  • You employ professional coders who don't understand the business well enough.
  • You employ professional testers who don't understand programming well enough.
  • After completion, the scripts cannot be easily maintained by a junior coder or a computer-savvy non-coder.

Any of these problems could derail an automated testing effort, and more often than not one does. If you have any experience with automated testing you will be able to relate to this. And all of these problems exist because of a single reason: the division of labor - who does what.

 

Q. Hold on, this is kind of shocking. You're saying that a major reason why automated testing fails is because of a simple thing like that?

Look at it this way. For almost any product out there, there are three distinct groups of people involved: builders, users and maintenance people.

Let's take cars for example. We have car builders; they are called Ford, Toyota, BMW, Ferrari and so on. They employ people and operate factories to design and build the cars. Hopefully, the cars are designed with input from users. Users are the largest group of people. They buy and drive the cars. If something goes wrong with the car, the user doesn't take the car back to the factory, but takes it to a maintenance person who fixes the car.

Houses are another example. Architects and builders build the houses. Users then buy them and live in them. If something goes wrong, like the roof leaking, the user calls a maintenance person, who fixes the problem. Users are still the most important people, since they buy and live in the houses.

Software isn't much different. We often employ a large group of people we call developers to design and build the software. Many of these developers are hired on contract for this particular phase. The software then allows the user to perform whatever task they need to perform. Finally upgrades and fixes are usually performed by a group of maintenance programmers, which is often much smaller than the original group of programmers who built the software. Again, the users utilize the software, without having any knowledge of how the software was built or is maintained.

Look at almost all products and you will see that this division of labor exists. The builder, user and maintenance paradigm is probably in existence because it works. It is the fastest, most efficient process and produces the best results. It is there because millions of people have learned through experience that it is the most effective way to create a product.

This paradigm holds everywhere, except when it comes to automated testing! [Laughs]. Here we make the user the builder and the maintenance person at the same time!

If you want the tester to build, use and maintain the automated testing software scripts, it is possible. It is possible for people to build their own cottages, but it is a labor of love and can take years! Is it possible for you to do a major renovation of your house? It will save you money and I'm sure you can finish it in three weeks so it will only be a MINOR inconvenience to your living arrangements. [Laughs]

Sometimes you can find individuals who can perform all three tasks effectively, but the probability that the majority of your testers have the time and skill to do all three is remote.

Automated testing scripts are a product. For automated testing to succeed you need a division of labor into builders, users and maintenance people. This will utilize people much more efficiently and give you a much better chance of success.

 

Q. So you're saying the implementation process is a major part of the problem itself?

In a nutshell, correct. We've unquestioningly accepted a very untenable situation. I'll take it a step further for you. In a really workable world you want the following to take place:

  • Testers use the power of automated testing without having to script. 
  • Testers easily handle large amounts of test data and scenarios. 
  • Coders code automated testing scripts without having to understand the business. 
  • You create easy-to-understand scripts with a coherent set of standards.
  • You produce scripts which can be easily maintained by a junior coder or a computer-savvy non-coder.

This is really a mind-blower, almost a holy grail. If you could pull these events off, then your project's chance of success is huge!

Q. But seriously, isn't this just a pipe dream?

This is why we built Scenario Tester, so we can deliver these results. But I'll explain that further on.

We knew that we had to find a way for testers to be able to use the power of automated testing without having to script. The answer is in watching what testers actually do. Testers enter actions or data in an application, and then test for expected results. We thought, could there not be a way to duplicate this activity? Would it not be nice if the testers could just say, "Here are my inputs and here are my expected results, now go test the app for me"?

Every tester out there is saying, "Yeah, right!" And I would like to be paid a million dollars to test, and I want to create world peace and end world hunger while I am at it. [Laughs] But hold on there. Why is this idea so preposterous? You can imagine going to another person, explaining what needs to be tested, giving them the expected results, and having that person go out and perform the task. Now we just need a program to do it instead.

Q. So you took a look at the testing process, and came up with a way to automate that?

Right. Two issues have to be solved: how does the tester communicate what needs to be tested, and how do we organize a program to perform that task?

Let's solve the first problem. When testers start testing, they tend to go through the same thought pattern. They first have to decide what it is that they want to test. Whether you call these test cases, test requirements, use cases or test scenarios doesn't matter. Then they think, which parts of my app will I use? Finally they think, what are my specific inputs, what action will I take and what are my specific expected results? It really doesn't matter what you are testing; the process tends to be the same.

We decided that there must be a way to create an interface that mimics this.

We wrote a program to do that and named it "Scenario Tester". Scenario Tester is a front end that duplicates the data interface of the application under test and presents it to the tester in one screen.

[Turns to the screen and brings up Scenario Tester]

 

  1. List of Scenarios: these are the scenarios you want to test. Scenarios are usually representative of the big processes in the system such as "processing an order with a new customer" or "processing an order with an existing customer".
  2. List of Screens: these are the parts of the application where you want to enter data or test for expected results. These map pretty much to individual screens, but they can be used for multiple screens as well.
  3. List of Fields: these are all the fields, the actual data you want to enter, and the expected results. These are pretty much the actual field names on the screen. We use the word "field" loosely for any object into which data is entered, in which a selection is made, or from which data is extracted for testing.

For every application that we have worked on, whether mainframe, GUI or Web, we have managed to represent the entire application in this one screen in Scenario Tester. It gives us no end of pleasure to show developers that the 300 screens they spent a year developing can be represented on one screen in a very short period of time. The testers then enter their test data into the "List of Fields". We like to call it "testing by filling in the blanks".
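
As a purely illustrative sketch (Scenario Tester presents this as one screen, not a text file, and the screen and field names below are invented for the example), a single scenario boils down to a list of screens, each with the fields to fill in and the results to check:

Scenario: Processing an order with a new customer
    Screen: Find Client
        Customer No      = 4321       (data to enter)
    Screen: Contract
        Contract Number  = 45627      (data to enter)
        Contract Type    = Lease      (data to enter)
        Status           = Approved   (expected result)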

The issue now becomes how easy it is for testers to learn, and how quickly they can enter their scenarios by filling in the field data. The answer to both is overwhelmingly positive.

Testers pick up the interface almost instantaneously. The learning curve to use Scenario Tester is almost non-existent. People are up to speed in minutes. We had one instance where we went to train a person on how to use Scenario Tester and we were about ten minutes late. She told us that she had already entered the first five scenarios and had no questions.

The other aspect is that once you create the first few scenarios, the rest are often just variations of the first few. These scenarios can be created by copying previous scenarios and then adjusting the data. We had one tester who created over 250 scenarios for Y2K testing in less than three hours.

It still amazes us to see testers live in Scenario Tester as they create and execute scenarios. It is a simple process: they create the scenarios, run them and look at the logs to check the results. Most testers, no matter how junior, catch on almost instantaneously.

Q. So that's great for controlling the data. I can see how that helps. But what about the scripting part?

The testers control the automated testing process without having to look at any code. Someone still needs to write the code that makes the whole process work.

Generally it shouldn't be a tester who writes the code. If they are so inclined they can do it, but it is not usually efficient. Remember our previous discussion that there are users, builders and maintenance people. The testers are the users. Now you need to allocate individuals to build the background scripts.

Q. So the tester doesn't really script. How much staff should be devoted to scripting?

The resources needed vary from application to application. They vary with the number of screens and the complexity of the application.

Then there are the issues which crop up with the creation of any code: testers tell you they want one thing and then change their minds; you have problems with the recognition of certain objects; the developers change the application and you need to re-work the scripts; and finally there are delays in getting proper test data from the testers. All of these affect the time it takes to build scripts.

What you will need is a dedicated coder or team of coders to build the scripts. If you choose to have an external team build the code, and the scripts will later be maintained by an internal person, it is advisable to have that person on the coding team.

Q. So there is a coding effort involved. What ensures that the scripts meet the objectives for being robust and re-usable, as well as easy-to-understand?

The scripts are built generically. They follow a fixed set of standards, which makes them quite simple, logical and reusable. If you follow our standards, which are built around a logical system of comments, you will find that you have elegant and comprehensible scripts. Many times we can create 90-95% of the scripts from a template, without a programmer's aid. That's an amazing time and cost saving.

As always, there are some scripts where you have to do some extra coding, and some of this coding can be fairly complex. This is usually code which, once written, is seldom changed. Most of the changes that maintenance people have to make are in the 90-95% of scripts which are straightforward. An example of more complicated code is a control which is not properly recognized by the automated testing tool and needs a special function written so it can be recognized.

Q. From what I gather so far, this solves the dilemma of the testers who can't code and the coders who can't test.

The strategy we employ is to code a part of an application and then set up Scenario Tester so the testers can enter scenarios. This gives a fairly quick turnaround, where the testers have scripts they can run to test the application within a short period of time.

We have found that if you do not have access to the business person/tester who really, really knows the application, then all your timelines will go out the window. This is the single biggest reason for delays. Ideally, an expert tester/business person should be allocated to the build stage and have no other activities. We cannot overemphasize how important this is.

Q. So with this strategy you're able to leverage the strengths of the business user and tester, as well as the coder, and get them to do what they do best...

Because of the generic nature of the scripts, the coders who create them do not have to be completely familiar with the business logic of the application. To script in a way that will be useful to the testers, there needs to be cooperation between the testers and coders. You need someone with the business knowledge to tell the coders how the application works, what the functions and navigational flow are, and to provide them with scenarios that have test data. But yes, that is the major benefit of our scenario testing approach.

It also allows you to hire external coders to help you complete the build phase.

Q. What happens when the test scripts have been implemented?

Once the scripts have been written and scenarios have been entered, you enter the maintenance phase. Hopefully, one of the initial coders will perform that function. Given that the code is written in a very straightforward fashion and uses a good set of standards, maintenance is usually a straightforward activity.

Maintenance is performed when new functionality has been added to the application, or changes in the application force changes in the scripts, or new scenarios involve additions or changes to the scripts.

We have found that when we leave a company, it is a junior coder or a fairly sophisticated tester who takes over the scripts. This person often has to maintain code for several applications. Because of this, it is very important to create scripts which are simple and easy to understand and fix, along with a set of standards that makes sense.

Q. Does Scenario Tester work with all testing tools?

There are many automated tools of very good quality out there. Since we have separated the test data from the scripts, the tool you select is up to you. The main function of the tools is to recognize the objects in the application. So choose the one that does this best.

We most often use Rational Robot for GUI and Web testing and Visual Basic when we test mainframes. But any of the tools will do the job. We've worked with QARun, WinRunner and TestPartner as well as Rational and Visual Basic.

We have included sample scripts for QARun, TestPartner, WinRunner and Robot with the application.

Q. How can readers learn more about the Scenario Tester approach and Scenario Tester?

Well, I think so far they've found out that the Scenario Tester approach is both a process and a program. They've seen that it promotes a highly effective data-driven structure, and that it leverages the strengths of the coders and testers. It leads to automated testing that works, is maintainable, lasts longer and is actually used! And that's money in the bank any way you cut it.

In my next interview I'll be discussing exactly how you can use the Scenario Tester program to implement testing most effectively. You'll see who does what, and how they do it!

Q. Thank you for your time, Mr. Bornet.

Thank you and your readers for listening.
