I've noticed two styles I've used on my last project, which was also my first TDD project. The first is from when I was newer: I would write one absolute-minimum test and make it pass, then move on to another test. Now, still taking small steps, I often find myself writing a complete test with, say, five asserts, and then incrementally adding to the method under test to get past each assert in turn, until all pass. Any comments on this approach?
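Roughly, the two shapes look like this (JUnit 3 style; Stack is a made-up class here, not java.util.Stack, and each test class would live in its own file):

    import junit.framework.TestCase;

    // Style 1: one absolute-minimum test at a time, each made to pass before the next.
    public class StackTinyStepTest extends TestCase {
        public void testNewStackIsEmpty() {
            assertTrue(new Stack().isEmpty());
        }

        public void testPushMakesStackNonEmpty() {
            Stack stack = new Stack();
            stack.push("x");
            assertFalse(stack.isEmpty());
        }
    }

    // Style 2: one complete test written up front, with several asserts; the method
    // under test grows until each assert in turn passes.
    class StackIncrementalTest extends TestCase {
        public void testPushThenPopRoundTrip() {
            Stack stack = new Stack();
            assertTrue(stack.isEmpty());
            stack.push("x");
            assertFalse(stack.isEmpty());
            assertEquals("x", stack.pop());
            assertTrue(stack.isEmpty());
        }
    }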
============
I've done both, and I feel that the flow is better with tiny tests - but I still find myself writing incremental tests from time to time. One technique that I've found helpful in keeping the tiny-test approach fluid is use of setUp() - and that got easier after Dave Astels' book taught me to write test classes per purpose, rather than per class. So tests that need different setups wind up in different test classes. When I was still in one-test-class-per-class mode, I was more inclined to write bigger test functions, because what belonged in setUp() was getting duplicated in each test.

Utility functions also come in handy - encapsulating the assertions that you want to make more generally. For me, there's kind of a tradeoff between utility functions and one-test-class-per-purpose: if I want the utility function to do assertions, then it should belong to a subclass of TestCase, and if I want my test classes to call those functions directly, then they have to subclass my utility class - and that soon becomes more structure than I want in my test classes.
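Here's the kind of structure I mean, as a rough sketch rather than real code (DomainAssertions, InterestCalculator, and all the names in it are invented; each class would sit in its own file):

    import junit.framework.TestCase;

    // Hypothetical utility base class: custom assertions live here, alongside the
    // inherited assert* methods, so test classes can call them directly...
    public abstract class DomainAssertions extends TestCase {
        protected void assertBetween(double low, double high, double actual) {
            assertTrue("expected a value in [" + low + ", " + high + "] but got " + actual,
                    low <= actual && actual <= high);
        }
    }

    // ...at the cost of every test class extending the utility class instead of
    // TestCase directly - the extra structure I mentioned.
    class InterestCalculatorTest extends DomainAssertions {
        public void testRateStaysWithinLegalBounds() {
            assertBetween(0.0, 0.25, new InterestCalculator().currentRate());
        }
    }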
================
One test-class per purpose, and not per class? Please elucidate. What defines a purpose, and how do you organize your tests? Does one class have many purposes, or does one purpose span many classes?
===============
A simplistic, mechanical answer - and one that serves me reasonably well - is that the setup defines the purpose. If I need different setup, that drives me to create a different test class. I don't have Astels' book in front of me, but I do recommend it; it explains this much better than I can here. My practice is not as absolute as my remarks may have suggested - sometimes my test classes include cases with different (inline) setup. But when I see too much of that, or duplicated setup in multiple tests, that's the code asking me to sprout a new test class. Or a utility function, depending. On what, I can't really say.
Well - the tests are never far from the class (same package, suffixed .test), and Eclipse lets me find references, so I never have a problem finding the test class. Regarding naming - take Range, for example. I might have RangeTest, covering contains(), isEmpty(), and encompass(), whose setup includes a single Range, or a few Ranges against which those functions can be evaluated, plus RangeInteractionTest, which tests intersects(), intersection(), and other - well, Range interactions. I'd have to look at my code to find real examples; no time right now. But that should give you an idea. Really, though - check out the book.
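In the meantime, the split might look something like this - reconstructed from memory, not my actual code, and the Range constructor and its equals() behavior are guesses (the two classes would be separate files):

    import junit.framework.TestCase;

    // Fixture: a single Range whose own behavior is being examined.
    public class RangeTest extends TestCase {
        private Range range;

        protected void setUp() {
            range = new Range(1, 10);
        }

        public void testContains() {
            assertTrue(range.contains(5));
            assertFalse(range.contains(11));
        }

        public void testIsEmpty() {
            assertFalse(range.isEmpty());
        }
    }

    // Fixture: two Ranges, because every test here is about how they interact.
    class RangeInteractionTest extends TestCase {
        private Range left;
        private Range right;

        protected void setUp() {
            left = new Range(1, 10);
            right = new Range(5, 20);
        }

        public void testIntersects() {
            assertTrue(left.intersects(right));
        }

        public void testIntersection() {
            // assumes Range implements equals()
            assertEquals(new Range(5, 10), left.intersection(right));
        }
    }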
============
From Dave Astels' Test-Driven Development: A Practical Guide, pp. 74-76:

Let's begin by considering TestCase. It is used to group related tests together. But what does "related" mean? It is often misunderstood to mean all tests for a specific class or specific group of related classes. This misunderstanding is reinforced by some of the IDE plug-ins that will generate a TestCase for a specified class, creating a test method for each method in the target class. These test creation facilities are overly simplistic at best, and misleading at worst. They reinforce the view that you should have a TestCase for each class being tested, and a test for each method in those classes. But that approach has nothing to do with TDD, so we won't discuss it further. This structural correspondence of tests misses the point. You should write tests for behaviors, not methods. A test method should test a single behavior...

...TestCase is a mechanism to allow _fixture_ reuse. Each TestCase subclass represents a fixture, and contains a group of tests that run in the context of that fixture. A fixture is the set of preconditions and assumptions with which a test is run. It is the runtime context for the test, embodied in the instance variables of the TestCase, the code in the setUp() method, and any variables and setup code local to the test method...

...Instead of using TestCase to group tests for a given class, try thinking about it as a way to group tests that need to be set up in exactly the same way. A measure of how well your TestCase is mapping to the requirements of a single fixture is how uniformly that fixture (as described by the setUp() method) is used by all of the test methods. Whenever you discover that your setUp() method contains code for some of your test methods, and different code for other test methods, consider it a smell that indicates that you should refactor the TestCase into two or more TestCases. Once you get the hang of defining TestCases this narrowly, you will find that they are easier to understand and maintain.
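To make that "refactor the TestCase" smell concrete, here is a quick sketch of my own - the Account class and all of its methods are invented for illustration, not from the book, and each class would be its own file:

    import junit.framework.TestCase;

    // The smell: setUp() builds a second Account that only the transfer test uses.
    public class AccountTest extends TestCase {
        private Account account;
        private Account other;   // only needed by testTransfer()

        protected void setUp() {
            account = new Account(100);
            other = new Account(0);
        }

        public void testOpeningBalance() {
            assertEquals(100, account.balance());
        }

        public void testTransfer() {
            account.transferTo(other, 40);
            assertEquals(60, account.balance());
            assertEquals(40, other.balance());
        }
    }

    // After the split: each TestCase owns exactly one fixture, used uniformly.
    class SingleAccountTest extends TestCase {
        private Account account;

        protected void setUp() {
            account = new Account(100);
        }

        public void testOpeningBalance() {
            assertEquals(100, account.balance());
        }
    }

    class AccountTransferTest extends TestCase {
        private Account source;
        private Account destination;

        protected void setUp() {
            source = new Account(100);
            destination = new Account(0);
        }

        public void testTransferMovesMoney() {
            source.transferTo(destination, 40);
            assertEquals(60, source.balance());
            assertEquals(40, destination.balance());
        }
    }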