By the time a piece of software hits your hard drive, it's safe to say that hundreds or even thousands of hours have been invested in its creation. A key part of the development process is testing, which should cover both how the application is structured and whether it does what it was designed to do. These processes are often referred to as white box and black box testing.
Boris Beizer, a Huntingdon Valley, Penn.-based veteran software tester and author of several books on the subject, including Black Box Testing, Software Testing Techniques and Software System Testing and Quality Assurance, dislikes the terms "white box" and "black box" because to him they sound too simplistic and mysterious. Beizer prefers the terms structural and behavioural testing.
White box or structural testing is designed to test the way that a piece of software is built.
"It has to be done by a programmer who knows about the guts of the software," Beizer said, explaining why synonyms for this methodology include clear box or glass box testing.
Black box or behavioural testing is a methodology that treats the system as an opaque box: the tester works without knowledge of the internal code and generally focuses on testing functional requirements. This process is also known as functional testing, opaque box testing and closed box testing, and is often completed by someone other than the developer.
"An application can be complete, highly automated and have good theory behind it, but might be totally irrelevant to what it's supposed to do. Imagine testing an application that's supposed to be a word processor, but has been developed as an accounting package. Every instruction that's been written reacts in the way that the developer designed it, but the application doesn't behave the way that it's supposed to for it to be a word processor. That's behavioural testing," Beizer said.
These testing methodologies are complementary, Beizer said, noting that both are done in any well-run organization.
"What process you're using depends on where you are in the development cycle," he explained. "Early in the cycle, when you're dealing with small components, you rely mostly on structural testing executed by the programmer, but once you're closer to the end of the cycle, you rely more on behavioural testing."
The key to good testing, Beizer said, is knowing when to say when. Software testing and quality assurance is philosophically very different from the testing that has traditionally worked well in other industries such as manufacturing.
"You've got to get to the point of making a trade-off with software - you're going to ship software with bugs, because it's not always cost effective to make every little fix. For a company like Boeing, they should be spending billions to fix every single bug, but should a company shipping a word processor spend billions to fix every bug that might affect a few users with a certain configuration? That's where good companies do the trade-off and try to debug to the point of diminishing returns," he said.
Several years ago, Beizer worked with WordPerfect, which he said at the time had a reputation for the best software quality on the market. He used WordPerfect with his Brother laser printer, which had an Epson printer emulation mode that made the laser printer act like a dot matrix printer. He discovered that when printing in this mode, the justification in his WordPerfect documents would be off, which he attributed to a bug in WordPerfect's printer driver.
"I was able to tie that down, and went to WordPerfect with the problem. They came back to me and said that they had studied the bug and came to the conclusion that I was the only person in the world to ever use their laser printer to look like a dot matrix and that it was unlikely that anyone else would ever run into the problem, so they said they weren't going to fix it. That was a wonderful answer, because a good company's philosophy should be to expect bugs," he said.
Alex Samurin, a Toronto-based tester, pointed out that besides traditional black box and white box testing, grey box testing has become important for Web and Internet applications. Grey box testing is a strategy based partly on an application's internals: testing is done from outside the product, as in black box testing, but test choices are informed by knowledge of the application's underlying components. According to Samurin, the grey box methodology is fairly new and not yet fully embraced by the testing community, but he considers widespread acceptance largely beside the point.
"From my point of view, it doesn't matter what colour the cat is as long as it catches the mouse," Samurin said.