Write test classes per purpose, rather than per class

This post collects a discussion of two styles of test-driven development (TDD): writing one minimal test at a time and making it pass, versus writing a complete test with several asserts up front and implementing incrementally. It also covers keeping tests lean and well organized by using setUp() and by creating separate test classes for different purposes.

I've noticed two styles I've used on my most recent project, which was also my first TDD project. The first is from when I was newer: I would write one absolute-minimum test and make it pass, then move on to another test. Now, still taking small steps, I often find myself writing a complete test, with say five asserts, and then incrementally adding to the method under test to get past each assert, until all of them pass. Any comments on this approach?
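A minimal sketch of the contrast, assuming a hypothetical Stack class with push(int), pop(), size(), and isEmpty() (the class and the method names are invented for illustration, not taken from the posts), written in JUnit 3 style:

import junit.framework.TestCase;

public class StackTest extends TestCase {

    // Style 1: one absolute-minimum test per behavior, each made to pass
    // before moving on to the next.
    public void testNewStackIsEmpty() {
        assertTrue(new Stack().isEmpty());
    }

    public void testPushMakesStackNonEmpty() {
        Stack stack = new Stack();
        stack.push(1);
        assertFalse(stack.isEmpty());
    }

    // Style 2: one complete test with several asserts, with the implementation
    // grown incrementally until each assert in turn passes.
    public void testPushThenPopLifecycle() {
        Stack stack = new Stack();
        assertTrue(stack.isEmpty());
        stack.push(1);
        assertFalse(stack.isEmpty());
        assertEquals(1, stack.size());
        assertEquals(1, stack.pop());
        assertTrue(stack.isEmpty());
    }
}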

============

I've done both, and I feel that the flow is better with tiny tests - but I still find myself writing incremental tests from time to time. One technique that I've found helpful in keeping the tiny-test approach fluid is use of setUp() - and that got easier after Dave Astels' book taught me to write test classes per purpose, rather than per class. So tests that need different setups wind up in different test classes. When I was still in one-test-class-per-class mode, I was more inclined to write bigger test functions, because what belonged in setUp() was getting duplicated in each test.

Utility functions also come in handy - encapsulating the assertions that you want to make more generally. For me, there's a tradeoff between utility functions and one-test-class-per-purpose: if I want the utility function to do assertions, then it should belong to a subclass of TestCase, and if I want my test classes to call those functions directly, then they have to subclass my utility class - and that soon becomes more structure than I want in my test classes.
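To make that tradeoff concrete, here is a rough sketch under JUnit 3, with an invented ListAssertions helper and SortingTest (the names and the assertion are mine, not the poster's): the shared assertion lives in a TestCase subclass, and any test class that wants to call it directly has to extend it.

// ListAssertions.java
import java.util.List;
import junit.framework.TestCase;

public abstract class ListAssertions extends TestCase {

    // Utility assertion: fails unless the list is in ascending order.
    protected void assertAscending(List<Integer> list) {
        for (int i = 1; i < list.size(); i++) {
            assertTrue("not ascending at index " + i,
                       list.get(i - 1) <= list.get(i));
        }
    }
}

// SortingTest.java -- gets the helper only by subclassing it, which is the
// "more structure than I want" part of the tradeoff.
public class SortingTest extends ListAssertions {
    public void testAlreadySortedListIsAscending() {
        assertAscending(java.util.Arrays.asList(1, 2, 3));
    }
}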

================

One test-class per purpose, and not per class? Please elucidate. What defines a purpose, and how do you organize your tests? Does one class have many purposes, or does one purpose span many classes?

===============

A simplistic, mechanical answer - and one that serves me reasonably well - is that the setup defines the purpose. If I need different setup, that drives me to create a different test class. I don't have Astels' book in front of me, but I do recommend it - it explains this much better than I can here. My practice is not as absolute as my remarks may have suggested - sometimes my test classes will include cases with different (inline) setup. But when I see too much of that, or duplicated setup in multiple tests, that's the code asking me to sprout a new test class. Or a utility function, depending. On what, I can't really say.

Well - the tests are never far from the class (same package, suffixed .test), and Eclipse lets me find references, so I never have a problem finding the test class. Regarding naming - take Range, for example. I might have RangeTest, covering contains(), isEmpty(), and encompass(), whose setup includes a single Range, or a few Ranges for which those functions can be evaluated, plus RangeInteractionTest, which tests intersects(), intersection(), and other - well, Range interactions. I'd have to look at my code to find real examples; no time right now. But that should give you an idea. Really, though - check out the book.
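A rough sketch of that split, assuming a hypothetical Range class with the methods named above (the constructor, the int-based contains(), and the equals() that assertEquals relies on are invented for illustration):

// RangeTest.java -- fixture: a single Range that the query methods can be
// evaluated against.
import junit.framework.TestCase;

public class RangeTest extends TestCase {
    private Range range;

    protected void setUp() {
        range = new Range(1, 10);
    }

    public void testContainsValueInsideTheRange() {
        assertTrue(range.contains(5));
    }

    public void testIsNotEmpty() {
        assertFalse(range.isEmpty());
    }
}

// RangeInteractionTest.java -- fixture: a pair of Ranges, because every test
// here is about how two Ranges interact.
import junit.framework.TestCase;

public class RangeInteractionTest extends TestCase {
    private Range left;
    private Range right;

    protected void setUp() {
        left = new Range(1, 10);
        right = new Range(5, 15);
    }

    public void testOverlappingRangesIntersect() {
        assertTrue(left.intersects(right));
    }

    public void testIntersectionIsTheOverlap() {
        assertEquals(new Range(5, 10), left.intersection(right));
    }
}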

============

From Dave Astels' Test-Driven Development: A Practical Guide, pp. 74-76:

Let's begin by considering TestCase. It is used to group related tests together. But what does "related" mean? It is often misunderstood to mean all tests for a specific class or specific group of related classes. This misunderstanding is reinforced by some of the IDE plug-ins that will generate a TestCase for a specified class, creating a test method for each method in the target class. These test creation facilities are overly simplistic at best, and misleading at worst. They reinforce the view that you should have a TestCase for each class being tested, and a test for each method in those classes. But that approach has nothing to do with TDD, so we won't discuss it further. This structural correspondence of tests misses the point. You should write tests for behaviors, not methods. A test method should test a single behavior...

...TestCase is a mechanism to allow _fixture_ reuse. Each TestCase subclass represents a fixture, and contains a group of tests that run in the context of that fixture. A fixture is the set of preconditions and assumptions with which a test is run. It is the runtime context for the test, embodied in the instance variables of the TestCase, the code in the setUp() method, and any variables and setup code local to the test method...

...Instead of using TestCase to group tests for a given class, try thinking about it as a way to group tests that need to be set up in exactly the same way. A measure of how well your TestCase is mapping to the requirements of a single fixture is how uniformly that fixture (as described by the setUp() method) is used by all of the test methods. Whenever you discover that your setUp() method contains code for some of your test methods, and different code for other test methods, consider it a smell that indicates that you should refactor the TestCase into two or more TestCases. Once you get the hang of defining TestCases this narrowly, you will find that they are easier to understand and maintain.
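To illustrate the smell in the last paragraph, continuing the earlier hypothetical Range example: a single test class whose setUp() builds two different fixtures, each used by only some of the tests - the signal to split it into the two narrower classes sketched above.

import junit.framework.TestCase;

public class RangeEverythingTest extends TestCase {
    private Range single;
    private Range left;
    private Range right;

    protected void setUp() {
        single = new Range(1, 10);   // used only by the contains/isEmpty tests
        left = new Range(1, 10);     // used only by the interaction tests
        right = new Range(5, 15);
    }

    public void testContains() {
        assertTrue(single.contains(5));       // ignores left and right
    }

    public void testIntersects() {
        assertTrue(left.intersects(right));   // ignores single
    }
}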
