
Is Design Dead?

Martin Fowler
Chief Scientist, ThoughtWorks

For many who come briefly into contact with Extreme Programming, it seems that XP calls for the death of software design. Not only is much design activity ridiculed as "Big Up Front Design", but design techniques such as the UML, flexible frameworks, and even patterns are de-emphasized or downright ignored. In fact XP involves a lot of design, but does it in a different way than established software processes. XP has rejuvenated the notion of evolutionary design with practices that allow evolution to become a viable design strategy. It also provides new challenges and skills, as designers need to learn how to do a simple design, how to use refactoring to keep a design clean, and how to use patterns in an evolutionary style.

(This paper was written for my keynote at the XP 2000 conference and its original form was published as part of the proceedings.)

Extreme Programming (XP) challenges many of the common assumptions about software development. Of these one of the most controversial is its rejection of significant effort in up-front design, in favor of a more evolutionary approach. To its detractors this is a return to "code and fix" development - usually derided as hacking. To its fans it is often seen as a rejection of design techniques (such as the UML), principles and patterns. Don't worry about design, if you listen to your code a good design will appear.

I find myself at the center of this argument. Much of my career has involved graphical design languages - the Unified Modeling Language (UML) and its forerunners - and patterns. Indeed I've written books on both the UML and patterns. Does my embrace of XP mean I recant all of what I've written on these subjects, cleansing my mind of all such counter-revolutionary notions?

Well, I'm not going to leave you dangling on the hook of dramatic tension. The short answer is no. The long answer is the rest of this paper.

Planned and Evolutionary Design

For this paper I'm going to describe two styles of how design is done in software development. Perhaps the most common is evolutionary design. Essentially evolutionary design means that the design of the system grows as the system is implemented. Design is part of the programming process, and as the program evolves the design changes.

In its common usage, evolutionary design is a disaster. The design ends up being the aggregation of a bunch of ad hoc tactical decisions, each of which makes the code harder to alter. In many ways you might argue this is no design, and it certainly usually leads to a poor design. As Kent puts it, design is there to enable you to keep changing the software easily in the long term. As design deteriorates, so does your ability to make changes effectively. This is the state of software entropy: over time the design gets worse and worse. Not only does this make the software harder to change, it also makes bugs both easier to breed and harder to find and safely kill. This is the "code and fix" nightmare, where bugs become exponentially more expensive to fix as the project goes on.

Planned Design is a counter to this, and contains a notion born from other branches of engineering. If you want to build a doghouse, you can just get some wood together and get a rough shape. However if you want to build a skyscraper, you can't work that way - it'll just collapse before you even get half way up. So you begin with engineering drawings, done in an engineering office like the one my wife works at in downtown Boston. As she does the design she figures out all the issues, partly by mathematical analysis, but mostly by using building codes. Building codes are rules about how you design structures based on experience of what works (and some underlying math). Once the design is done, then her engineering company can hand the design off to another company that builds it.

Planned design in software should work the same way. Designers think out the big issues in advance. They don't need to write code because they aren't building the software, they are designing it. So they can use a design technique like the UML that gets away from some of the details of programming and allows the designers to work at a more abstract level. Once the design is done they can hand it off to a separate group (or even a separate company) to build. Since the designers are thinking on a larger scale, they can avoid the series of tactical decisions that lead to software entropy. The programmers can follow the direction of the design and, providing they follow the design, have a well-built system.

Now the planned design approach has been around since the 70s, and lots of people have used it. It is better in many ways than code-and-fix evolutionary design. But it has some faults. The first fault is that it's impossible to think through all the issues that you need to deal with when you are programming. So it's inevitable that when programming you will find things that question the design. However, what happens if the designers have finished and moved on to another project? The programmers start coding around the design and entropy sets in. Even if the designer isn't gone, it takes time to sort out the design issues, change the drawings, and then alter the code. There's usually a quicker fix, and there's time pressure. Hence entropy (again).

Furthermore there's often a cultural problem. Designers are made designers due to skill and experience, but they are so busy working on designs they don't get much time to code any more. However the tools and materials of software development change at a rapid rate. When you no longer code, not only can you miss out on the changes that occur with this technological flux, you also lose the respect of those who do code.

This tension between builders and designers happens in building too, but it's more intense in software. It's intense because there is a key difference. In building there is a clearer division in skills between those who design and those who build, but in software that's less the case. Any programmer working in high design environments needs to be very skilled. Skilled enough to question the designer's designs, especially when the designer is less knowledgeable about the day to day realities of the development platform.

Now these issues could be fixed. Maybe we can deal with the human tension. Maybe we can get designers skillful enough to deal with most issues and have a process disciplined enough to change the drawings. There's still another problem: changing requirements. Changing requirements are the number one issue that causes headaches in the software projects I run into.

One way to deal with changing requirements is to build flexibility into the design so that you can easily change it as the requirements change. However this requires insight into what kind of changes you expect. A design can be planned to deal with areas of volatility, but while that will help for foreseen requirements changes, it won't help (and can hurt) for unforeseen changes. So you have to understand the requirements well enough to separate the volatile areas, and my observation is that this is very hard.

Now some of these requirements problems are due to not understanding requirements clearly enough. So a lot of people focus on requirements engineering processes to get better requirements in the hope that this will prevent the need to change the design later on. But even this direction is one that may not lead to a cure. Many unforeseen requirements changes occur due to changes in the business. Those can't be prevented, however careful your requirements engineering process.

So all this makes planned design sound impossible. Certainly these are big challenges. But I'm not inclined to claim that planned design is worse than evolutionary design as it is most commonly practiced in a "code and fix" manner. Indeed I prefer planned design to "code and fix". However I'm aware of the problems of planned design and am seeking a new direction.

The Enabling Practices of XP

XP is controversial for many reasons, but one of the key red flags in XP is that it advocates evolutionary design rather than planned design. As we know, evolutionary design can't possibly work due to ad hoc design decisions and software entropy.

At the core of understanding this argument is the software change curve. The change curve says that as the project runs, it becomes exponentially more expensive to make changes. The change curve is usually expressed in terms of phases: "a change made in analysis for $1 would cost thousands to fix in production". This is ironic, as most projects still work in an ad hoc process that doesn't have an analysis phase, but the exponentiation is still there. The exponential change curve means that evolutionary design cannot possibly work. It also conveys why planned design must be done carefully, because any mistake in planned design faces the same exponentiation.

The fundamental assumption underlying XP is that it is possible to flatten the change curve enough to make evolutionary design work. This flattening is both enabled by XP and exploited by XP. This is part of the coupling of the XP practices: specifically you can't do those parts of XP that exploit the flattened curve without doing those things that enable the flattening. This is a common source of the controversy over XP. Many people criticize the exploitation without understanding the enabling. Often the criticisms stem from critics' own experience where they didn't do the enabling practices that allow the exploiting practices to work. As a result they got burned and when they see XP they remember the fire.

There are many parts to the enabling practices. At the core are the practices of Testing, and Continuous Integration. Without the safety provided by testing the rest of XP would be impossible. Continuous Integration is necessary to keep the team in sync, so that you can make a change and not be worried about integrating it with other people. Together these practices can have a big effect on the change curve. I was reminded of this again here at ThoughtWorks. Introducing testing and continuous integration had a marked improvement on the development effort. Certainly enough to seriously question the XP assertion that you need all the practices to get a big improvement.
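
To make the testing practice concrete, here is a minimal sketch of the kind of automated test this relies on, written against JUnit 3 conventions (the framework of the article's era). The Invoice class is a hypothetical example, not code from the article; the point is only that behaviour the team relies on is pinned down by small checks that run on every integration.

    import junit.framework.TestCase;

    // Hypothetical example: a tiny class and the automated test that pins down
    // its behaviour. With a suite of such tests running on every integration,
    // a change that breaks something is caught within minutes rather than weeks.
    class Invoice {
        private int total = 0;

        void addLine(int amount) {
            total += amount;
        }

        int total() {
            return total;
        }
    }

    public class InvoiceTest extends TestCase {
        public void testTotalIsSumOfLines() {
            Invoice invoice = new Invoice();
            invoice.addLine(40);
            invoice.addLine(60);
            assertEquals(100, invoice.total());
        }
    }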

Refactoring has a similar effect. People who refactor their code in the disciplined manner suggested by XP find a significant difference in their effectiveness compared to doing looser, more ad-hoc restructuring. That was certainly my experience once Kent had taught me to refactor properly. After all, only such a strong change would have motivated me to write a whole book about it.

Jim Highsmith, in his excellent summary of XP, uses the analogy of a set of scales. In one tray is planned design; in the other is refactoring. In more traditional approaches planned design dominates because the assumption is that you can't change your mind later. As the cost of change lowers, you can do more of your design later as refactoring. Planned design does not go away completely, but there is now a balance of two design approaches to work with. For me it feels as if, before refactoring, I was doing all my design one-handed.

These enabling practices of continuous integration, testing, and refactoring provide a new environment that makes evolutionary design plausible. However one thing we haven't yet figured out is where the balance point is. I'm sure that, despite the outside impression, XP isn't just test, code, and refactor. There is room for designing before coding. Some of this happens before there is any coding; most of it occurs in the iterations, before coding for a particular task. But there is a new balance between up-front design and refactoring.

The Value of Simplicity

Two of the greatest rallying cries in XP are the slogans "Do the Simplest Thing that Could Possibly Work" and "You Aren't Going to Need It" (known as YAGNI). Both are manifestations of the XP practice of Simple Design.

The way YAGNI is usually described, it says that you shouldn't add any code today which will only be used by a feature that is needed tomorrow. On the face of it this sounds simple. The issue comes with such things as frameworks, reusable components, and flexible design. Such things are complicated to build. You pay an extra up-front cost to build them, in the expectation that you will gain back that cost later. This idea of building flexibility up-front is seen as a key part of effective software design.

However XP's advice is that you not build flexible components and frameworks for the first case that needs that functionality. Let these structures grow as they are needed. If I want a Money class today that handles addition but not multiplication then I build only addition into the Money class. Even if I'm sure I'll need multiplication in the next iteration, and understand how to do it easily, and think it'll be really quick to do, I'll still leave it till that next iteration.
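
As a hypothetical sketch of what that looks like in code (this Money class is an illustration, not the article's own code), the class does exactly what today's stories require and nothing more:

    // Hypothetical sketch of YAGNI in practice: the current stories only need
    // to add amounts of money, so only addition is built. Multiplication waits
    // for the iteration whose stories actually require it.
    public class Money {
        private final long amount;      // smallest currency unit, e.g. cents
        private final String currency;

        public Money(long amount, String currency) {
            this.amount = amount;
            this.currency = currency;
        }

        public Money add(Money other) {
            if (!currency.equals(other.currency)) {
                throw new IllegalArgumentException("currency mismatch");
            }
            return new Money(amount + other.amount, currency);
        }

        public long amount() {
            return amount;
        }

        // Deliberately absent: multiply(int factor). Even though we suspect
        // we'll need it next iteration, it is left until a story asks for it.
    }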

One reason for this is economic. If I have to do any work that's only used for a feature that's needed tomorrow, that means I lose effort from the features that need to be done for this iteration. The release plan says what needs to be worked on now; working on other things in the future is contrary to the developers' agreement with the customer. There is a risk that this iteration's stories might not get done. Even if this iteration's stories are not at risk, it's up to the customer to decide what extra work should be done - and that might still not involve multiplication.

This economic disincentive is compounded by the chance that we may not get it right. However certain we may be about how this function works, we can still get it wrong - especially since we don't have detailed requirements yet. Working on the wrong solution early is even more wasteful than working on the right solution early. And the XPerts generally believe that we are much more likely to be wrong than right (and I agree with that sentiment.)

The second reason for simple design is that a complex design is more difficult to understand than a simple design. Therefore any modification of the system is made harder by added complexity. This adds a cost during the period between when the more complicated design was added and when it was needed.

Now this advice strikes a lot of people as nonsense, and they are right to think that - right, providing that you imagine the usual development world where the enabling practices of XP aren't in place. However when the balance between planned and evolutionary design alters, then YAGNI becomes good practice (and only then).

So to summarize: you don't want to spend effort adding new capability that won't be needed until a future iteration. And even if it costs nothing to put in, you still don't want to add it, because it increases the cost of modification. However you can only sensibly behave this way when you are using XP, or a similar technique that lowers the cost of change.

What on Earth is Simplicity Anyway

So we want our code to be as simple as possible. That doesn't sound like it's too hard to argue for; after all, who wants to be complicated? But of course this raises the question "what is simple?"

In Extreme Programming Explained (XPE), Kent gives four criteria for a simple system. In order (most important first):

  • Runs all the Tests
  • Reveals all the intention
  • No duplication
  • Fewest number of classes or methods

Running all the tests is a pretty simple criterion. No duplication is also pretty straightforward, although a lot of developers need guidance on how to achieve it. The tricky one has to do with revealing the intention. What exactly does that mean?

The basic value here is clarity of code. XP places a high value on code that is easily read. In XP, "clever code" is a term of abuse. But one person's intention-revealing code is another's cleverness.

In his XP 2000 paper, Josh Kerievsky points out a good example of this. He looks at possibly the most public XP code of all - JUnit. JUnit uses decorators to add optional functionality to test cases, such things as concurrency synchronization and batch set up code. By separating out this code into decorators it allows the general code to be clearer than it otherwise would be.

But you have to ask yourself if the resulting code is really simple. For me it is, but then I'm familiar with the Decorator pattern. But for many that aren't it's quite complicated. Similarly JUnit uses pluggable methods which I've noticed most people initially find anything but clear. So might we conclude that JUnit's design is simpler for experienced designers but more complicated for less experienced people?
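
For readers who haven't met the pattern, here is a rough sketch of the decorator idea as JUnit applies it to tests. The interface and class names below are simplified illustrations of the shape of the design, not JUnit's actual API:

    // Simplified illustration of the Decorator pattern applied to tests (not
    // JUnit's real classes). Optional behaviour - here, repetition - is layered
    // around a test without touching the test itself, so the ordinary test code
    // stays clear of that concern.
    interface Test {
        void run();
    }

    class MoneyAdditionTest implements Test {
        public void run() {
            // ... exercise the code under test ...
        }
    }

    class RepeatedTest implements Test {
        private final Test inner;
        private final int times;

        RepeatedTest(Test inner, int times) {
            this.inner = inner;
            this.times = times;
        }

        public void run() {
            for (int i = 0; i < times; i++) {
                inner.run();
            }
        }
    }

    // Usage: new RepeatedTest(new MoneyAdditionTest(), 10).run();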

I think that the focus on eliminating duplication, both with XP's "Once and Only Once" and the Pragmatic Programmer's DRY (Don't Repeat Yourself) is one of those obvious and wonderfully powerful pieces of good advice. Just following that alone can take you a long way. But it isn't everything, and simplicity is still a complicated thing to find.
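
A small, hypothetical illustration of what that advice means at the code level: a validation rule that was written out in two methods now lives in exactly one place, so a change to the rule happens once:

    import java.util.List;

    // Hypothetical sketch of "Once and Only Once" / DRY: the check that was
    // duplicated in both public methods now lives in a single private helper.
    public class OrderService {

        public void submit(List<String> orderLines) {
            requireNonEmpty(orderLines);
            // ... submit the order ...
        }

        public void estimateShipping(List<String> orderLines) {
            requireNonEmpty(orderLines);
            // ... estimate shipping ...
        }

        private void requireNonEmpty(List<String> orderLines) {
            if (orderLines == null || orderLines.isEmpty()) {
                throw new IllegalArgumentException("order must have at least one line");
            }
        }
    }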

Recently I was involved in doing something that may well be over-designed. It got refactored and some of the flexibility was removed. But as one of the developers said "it's easier to refactor over-design than it is to refactor no design." It's best to be a little simpler than you need to be, but it isn't a disaster to be a little more complex.

The best advice I heard on all this came from Uncle Bob (Robert Martin). His advice was not to get too hung up about what the simplest design is. After all you can, should, and will refactor it later. In the end the willingness to refactor is much more important than knowing what the simplest thing is right away.

Does Refactoring Violate YAGNI?

This topic came up on the XP mailing list recently, and it's worth bringing out as we look at the role of design in XP.

Basically the question starts with the point that refactoring takes time but does not add function. Since the point of YAGNI is that you are supposed to design for the present not for the future, is this a violation?

The point of YAGNI is that you don't add complexity that isn't needed for the current stories. This is part of the practice of simple design. Refactoring is needed to keep the design as simple as you can, so you should refactor whenever you realize you can make things simpler.

Simple design both exploits XP practices and is also an enabling practice. Only if you have testing, continuous integration, and refactoring can you practice simple design effectively. But at the same time keeping the design simple is essential to keeping the change curve flat. Any unneeded complexity makes a system harder to change in all directions except the one you anticipate with the complex flexibility you put in. However people aren't good at anticipating, so it's best to strive for simplicity. However people won't get the simplest thing first time, so you need to refactor in order to get closer to the goal.
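
As a hypothetical sketch of the kind of simplifying refactoring this implies: an abstraction that was added speculatively, and that has only one implementation and no second client in sight, gets collapsed back into plain code. If a real second case arrives later, the abstraction can be reintroduced then.

    // Hypothetical sketch: a "just in case" interface is refactored away.
    //
    // Before:
    //   interface DiscountPolicy { long discountFor(long amount); }
    //   class StandardDiscount implements DiscountPolicy { ... }
    //   class Checkout { private final DiscountPolicy policy; ... }
    //
    // After: the behaviour lives directly where it is used, which is simpler to
    // read and to change until a second policy actually shows up in a story.
    public class Checkout {

        public long totalAfterDiscount(long amount) {
            return amount - discountFor(amount);
        }

        private long discountFor(long amount) {
            // five percent off large orders (amounts in smallest currency units)
            return amount > 10000 ? amount / 20 : 0;
        }
    }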

Patterns and XP

The JUnit example leads me inevitably into bringing up patterns. The relationship between patterns and XP is interesting, and it's a common question. Joshua Kerievsky argues that patterns are under-emphasized in XP, and he makes the argument eloquently, so I don't want to repeat it. But it's worth bearing in mind that for many people patterns seem to be in conflict with XP.

The essence of this argument is that patterns are often over-used. The world is full of the legendary programmer, fresh off his first reading of GOF, who includes sixteen patterns in 32 lines of code. I remember one evening, fueled by a very nice single malt, running through with Kent a paper to be called "Not Design Patterns: 23 cheap tricks". We were thinking of such things as using an if statement rather than a strategy. The joke had a point: patterns are often overused, but that doesn't make them a bad idea. The question is how you use them.
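
The "cheap trick" in that joke is easy to show with a hypothetical sketch: where a full Strategy would introduce a ShippingStrategy interface and a class per case, a plain conditional does the same job while there are only two simple cases, and can be refactored into the pattern if and when the cases multiply:

    // Hypothetical sketch of "use an if statement rather than a strategy".
    // With only two simple cases, the conditional is the simpler design; a
    // Strategy interface can be refactored in later if more carriers or more
    // complex pricing rules arrive.
    public class ShippingCost {

        public long costFor(long weightInGrams, boolean express) {
            if (express) {
                return 1000 + weightInGrams / 100;
            }
            return 500 + weightInGrams / 200;
        }
    }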

One theory of this is that the forces of simple design will lead you into the patterns. Many refactorings do this explicitly, but even without them, by following the rules of simple design you will come up with the patterns even if you don't know them already. This may be true, but is it really the best way of doing it? Surely it's better if you know roughly where you're going and have a book that can help you through the issues, instead of having to invent it all yourself. I certainly still reach for GOF whenever I feel a pattern coming on. For me, effective design means knowing when the price of a pattern is worth paying - that's its own skill. Similarly, as Joshua suggests, we need to be more familiar with how to ease into a pattern gradually. In this regard XP treats the way we use patterns differently from the way some people use them, but certainly doesn't remove their value.

But reading some of the mailing lists I get the distinct sense that many people see XP as discouraging patterns, despite the irony that most of the proponents of XP were leaders of the patterns movement too. Is this because they have seen beyond patterns, or because patterns are so embedded in their thinking that they no longer realize it? I don't know the answers for others, but for me patterns are still vitally important. XP may be a process for development, but patterns are a backbone of design knowledge, knowledge that is valuable whatever your process may be. Different processes may use patterns in different ways. XP emphasizes both not using a pattern until it's needed and evolving your way into a pattern via a simple implementation. But patterns are still a key piece of knowledge to acquire.

My advice to XPers using patterns would be

  • Invest time in learning about patterns
  • Concentrate on when to apply the pattern (not too early)
  • Concentrate on how to implement the pattern in its simplest form first, then add complexity later.
  • If you put a pattern in, and later realize that it isn't pulling its weight - don't be afraid to take it out again.

I think XP should emphasize learning about patterns more. I'm not sure how I would fit that into XP's practices, but I'm sure Kent can come up with a way.

Growing an Architecture

What do we mean by a software architecture? To me the term architecture conveys a notion of the core elements of the system, the pieces that are difficult to change. A foundation on which the rest must be built.

What role does an architecture play when you are using evolutionary design? Again, XP's critics state that XP ignores architecture, that XP's route is to go to code fast and trust that refactoring will solve all design issues. Interestingly they are right, and that may well be a weakness. Certainly the most aggressive XPers - Kent Beck, Ron Jeffries, and Bob Martin - are putting more and more energy into avoiding any up-front architectural design. Don't put in a database until you really know you'll need it. Work with files first and refactor the database in during a later iteration.
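
As a rough sketch of what "work with files first" can look like (the CustomerStore interface and class names here are hypothetical, not from the article): the rest of the system talks only to a narrow store interface, so refactoring a database in during a later iteration is a contained change rather than a rewrite.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;
    import java.util.List;

    // Hypothetical sketch: callers depend only on this small interface, not on
    // how records are stored.
    interface CustomerStore {
        void append(String customerLine) throws IOException;
        List<String> readAll() throws IOException;
    }

    // Early iterations: a plain file is enough.
    class FileCustomerStore implements CustomerStore {
        private final Path file;

        FileCustomerStore(Path file) {
            this.file = file;
        }

        public void append(String customerLine) throws IOException {
            Files.write(file, List.of(customerLine),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }

        public List<String> readAll() throws IOException {
            return Files.exists(file) ? Files.readAllLines(file) : List.of();
        }
    }

    // Later, if the stories demand it, a DatabaseCustomerStore can implement the
    // same interface and be refactored in without touching the callers.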

I'm known for being a cowardly XPer, and as such I have to disagree. I think there is a role for a broad starting-point architecture: such things as stating early on how to layer the application, how you'll interact with the database (if you need one), and what approach to use to handle the web server.

Essentially I think many of these areas are patterns that we've learned over the years. As your knowledge of patterns grows, you should have a reasonable first take at how to use them. However the key difference is that these early architectural decisions aren't expected to be set in stone, or rather the team knows that they may err in their early decisions, and should have the courage to fix them. Others have told the story of one project that, close to deployment, decided it didn't need EJB anymore and removed it from their system. It was a sizeable refactoring, it was done late, but the enabling practices made it not just possible, but worthwhile.

How would this have worked the other way round? If you decided not to use EJB, would it be harder to add it later? Should you thus never start with EJB until you have tried things without it and found it lacking? That's a question that involves many factors. Certainly working without a complex component increases simplicity and makes things go faster. However sometimes it's easier to rip out something like that than it is to put it in.

So my advice is to begin by assessing what the likely architecture is. If you see a large amount of data with multiple users, go ahead and use a database from day 1. If you see complex business logic, put in a domain model. However in deference to the gods of YAGNI, when in doubt err on the side of simplicity. Also be ready to simplify your architecture as soon as you see that part of the architecture isn't adding anything.

UML and XP

Of all the questions I get about my involvement with XP one of the biggest revolves around my association with the UML. Aren't the two incompatible?

There are a number of points of incompatibility. Certainly XP de-emphasizes diagrams to a great extent. Although the official position is along the lines of "use them if they are useful", there is a strong subtext of "real XPers don't do diagrams". This is reinforced by the fact that people like Kent aren't at all comfortable with diagrams; indeed I've never seen Kent voluntarily draw a software diagram in any fixed notation.

I think the issue comes from two separate causes. One is the fact that some people find software diagrams helpful and some people don't. The danger is that those who do think that those who don't should use them too, and vice versa. Instead we should just accept that some people will use diagrams and some won't.

The other issue is that software diagrams tend to get associated with a heavyweight process. Such processes spend a lot of time drawing diagrams that don't help and can actually cause harm. So I think that people should be advised how to use diagrams well and avoid the traps, rather than the "only if you must (wimp)" message that usually comes out of the XPerts.

So here's my advice for using diagrams well.

First keep in mind what you're drawing the diagrams for. The primary value is communication. Effective communication means selecting important things and neglecting the less important. This selectivity is the key to using the UML well. Don't draw every class - only the important ones. For each class, don't show every attribute and operation - only the important ones. Don't draw sequence diagrams for all use cases and scenarios - only... you get the picture. A common problem with the common use of diagrams is that people try to make them comprehensive. The code is the best source of comprehensive information, as the code is the easiest thing to keep in sync with the code. For diagrams comprehensiveness is the enemy of comprehensibility.

A common use of diagrams is to explore a design before you start coding it. Often you get the impression that such activity is illegal in XP, but that's not true. Many people say that when you have a sticky task it's worth getting together to have a quick design session first. However when you do such sessions:

  • keep them short
  • don't try to address all the details (just the important ones)
  • treat the resulting design as a sketch, not as a final design

 

The last point is worth expanding. When you do some up-front design, you'll inevitably find that some aspects of the design are wrong, and you only discover this when coding. That's not a problem providing that you then change the design. The trouble comes when people think the design is done, and then don't take the knowledge they gained through the coding and run it back into the design.

Changing the design doesn't necessarily mean changing the diagrams. It's perfectly reasonable to draw diagrams that help you understand the design and then throw the diagrams away. Drawing them helped, and that is enough to make them worthwhile. They don't have to become permanent artifacts. The best UML diagrams are not artifacts.

A lot of XPers use CRC cards. That's not in conflict with UML. I use a mix of CRC and UML all the time, using whichever technique is most useful for the job at hand.

Another use of UML diagrams is on-going documentation. In its usual form this is a model residing in a CASE tool. The idea is that keeping this documentation helps people work on the system. In practice it often doesn't help at all:

  • it takes too long to keep the diagrams up to date, so they fall out of sync with the code
  • they are hidden in a CASE tool or a thick binder, so nobody looks at them

 

So the advice for on-going documentation follows from these observed problems:

  • Only use diagrams that you can keep up to date without noticeable pain
  • Put the diagrams where everyone can easily see them. I like to post them on a wall. Encourage people to edit the wall copy with a pen for simple changes.
  • Pay attention to whether people are using them; if not, throw them away.

 

The last aspect of using UML is for documentation in a handover situation, such as when one group hands over to another. Here the XP point is that producing documentation is a story like any other, and thus its business value is determined by the customer. Again the UML is useful here, providing the diagrams are selective to help communication. Remember that the code is the repository of detailed information, the diagrams act to summarize and highlight important issues.

On Metaphor

Okay I might as well say it publicly - I still haven't got the hang of this metaphor thing. I saw it work, and work well on the C3 project, but it doesn't mean I have any idea how to do it, let alone how to explain how to do it.

The XP practice of Metaphor is built on Ward Cunningham's approach of a system of names. The point is that you come up with a well-known set of names that acts as a vocabulary to talk about the domain. This system of names plays into the way you name the classes and methods in the system.

I've built a system of names by building a conceptual model of the domain. I've done this with the domain experts using UML or its predecessors. I've found you have to be careful doing this. You need to keep to a minimal, simple set of notation, and you have to guard against letting any technical issues creep into the model. But if you do this, I've found that you can use it to build a vocabulary of the domain that the domain experts can understand and use to communicate with developers. The model doesn't match the class designs perfectly, but it's enough to give a common vocabulary to the whole domain.

Now I don't see any reason why this vocabulary can't be a metaphorical one, such as the C3 metaphor that turned payroll into a factory assembly line. But I also don't see why basing your system of names on the vocabulary of the domain is such a bad idea either. Nor am I inclined to abandon a technique that works well for me in getting the system of names.

Often people criticize XP on the basis that you do need at least some outline design of a system. XPers often respond with the answer "that's the metaphor". But I still don't think I've seen metaphor explained in a convincing manner. This is a real gap in XP, and one that the XPers need to sort out.

Do you wanna be an Architect when you grow up?

For much of the last decade, the term "software architect" has been popular. It's a term that is difficult for me personally to use. My wife is a structural engineer. The relationship between engineers and architects is ... interesting. My favorite line was "architects are good for the three B's: bulbs, bushes, birds". The notion is that architects come up with all these pretty drawings, but it's the engineers who have to ensure that they actually stand up. As a result I've avoided the term software architect; after all, if my own wife can't treat me with professional respect, what chance do I stand with anyone else?

In software, the term architect means many things. (In software any term means many things.) In general, however it conveys a certain gravitas, as in "I'm not just a mere programmer - I'm an architect". This may translate into "I'm an architect now - I'm too important to do any programming". The question then becomes one of whether separating yourself from the mundane programming effort is something you should do when you want to exercise technical leadership.

This question generates an enormous amount of emotion. I've seen people get very angry at the thought that they don't have a role any more as architects. "There is no place in XP for experienced architects" is often the cry I hear.

Much as with the role of design itself, I don't think it's the case that XP does not value experience or good design skills. Indeed many of the proponents of XP - Kent Beck, Bob Martin, and of course Ward Cunningham - are people from whom I have learned much about what design is about. However it does mean that their role changes from what a lot of people see as the role of technical leadership.

As an example, I'll cite one of our technical leaders at ThoughtWorks: Dave Rice. Dave has been through a few life-cycles and has assumed the unofficial mantle of technical lead on a fifty-person project. His role as leader means spending a lot of time with all the programmers. He'll work with a programmer when they need help, and he looks around to see who needs help. A significant sign is where he sits. As a long-term ThoughtWorker, he could pretty well have any office he liked. He shared one for a while with Cara, the release manager. However in the last few months he moved out into the open bays where the programmers work (using the open "war room" style that XP favors). This is important to him because this way he sees what's going on, and is available to lend a hand wherever it's needed.

Those who know XP will realize that I'm describing the explicit XP role of Coach. Indeed one of the several word games that XP plays is that it calls the leading technical figure the "Coach". The meaning is clear: in XP technical leadership is shown by teaching the programmers and helping them make decisions. It's a role that requires good people skills as well as good technical skills. Jack Bolles commented at XP 2000 that there is little room now for the lone master. Collaboration and teaching are the keys to success.

At a conference dinner, Dave and I talked with a vocal opponent of XP. As we discussed what we did, the similarities in our approach were quite marked. We all liked adaptive, iterative development. Testing was important. So we were puzzled at the vehemence of his opposition. Then came his statement, along the lines of "the last thing I want is my programmers refactoring and monkeying around with the design". Now all was clear. The conceptual gulf was further explained by Dave saying to me afterwards, "if he doesn't trust his programmers, why does he hire them?" In XP the most important thing the experienced developer can do is pass on as many skills as he can to the more junior developers. Instead of an architect who makes all the important decisions, you have a coach that teaches developers to make important decisions. As Ward Cunningham pointed out, by doing that he amplifies his skills, and adds more to a project than any lone hero can.

Reversibility

At XP 2002 Enrico Zaninotto gave a fascinating talk that discussed the tie-ins between agile methods and lean manufacturing. His view was that one of the key aspects of both approaches was that they tackled complexity by reducing the irreversibility in the process.

In this view one of the main sources of complexity is the irreversibility of decisions. If you can easily change your decisions, it's less important to get them right - which makes your life much simpler. The consequence for evolutionary design is that designers need to think about how they can avoid irreversibility in their decisions. Rather than trying to get the right decision now, look for a way to either put off the decision until later (when you'll have more information) or make the decision in such a way that you'll be able to reverse it later on without too much difficulty.

This determination to support reversibility is one of the reasons that agile methods put a lot of emphasis on source code control systems, and on putting everything into such a system. While this does not guarantee reversibility, particularly for long-lived decisions, it does provide a foundation that gives confidence to a team, even if it's rarely used.

Designing for reversibility also implies a process that makes errors show up quickly. One of the values of iterative development is that the rapid iterations allow customers to see a system as it grows, and if a mistake is made in requirements it can be spotted and fixed before the cost of fixing becomes prohibitive. This same rapid spotting is also important for design. This means that you have to set things up so that potential problem areas are rapidly tested to see what issues arise. It also means it's worth doing experiments to see how hard future changes would be, even if you don't actually make the real change now - effectively doing a throw-away prototype on a branch of the system. Several teams have reported trying out a future change early in prototype mode to see how hard it would be.

The Will to Design

While I've concentrated a lot on technical practices in this article, one thing that's too easy to leave out is the human aspect.

In order to work, evolutionary design needs a force that drives it to converge. This force can only come from people - somebody on the team has to have the determination to ensure that the design quality stays high.

This will does not have to come from everyone (although it's nice if it does), usually just one or two people on the team take on the responsibility of keeping the design whole. This is one of the tasks that usually falls under the term 'architect'.

This responsibility means keeping a constant eye on the code base, looking to see if any areas of it are getting messy, and then taking rapid action to correct the problem before it gets out of control. The keeper of the design doesn't have to be the one who fixes it - but they do have to ensure that it gets fixed by somebody.

A lack of will to design seems to be a major reason why evolutionary design can fail. Even if people are familiar with the things I've talked about in this article, without that will, design won't take place.

Things that are difficult to refactor in

Can we use refactoring to deal with all design decisions, or are there some issues that are so pervasive that they are difficult to add in later? At the moment, the XP orthodoxy is that all things are easy to add when you need them, so YAGNI always applies. I wonder if there are exceptions. A good example of something that is controversial to add later is internationalization. Is this something which is such a pain to add later that you should start with it right away?

I could readily imagine that there are some things that would fall into this category. However the reality is that we still have very little data. If you have to add something like internationalization in later, you're very conscious of the effort it takes to do so. You're less conscious of the effort it would actually have taken, week after week, to put it in and maintain it before it was actually needed. Also you're less conscious of the fact that you may well have got it wrong, and thus needed to do some refactoring anyway.

Part of the justification of YAGNI is that many of these potential needs end up not being needed, or at least not in the way you'd expect. The effort you save by not doing any of them outweighs the effort required to refactor in the ones you do actually need.

Another issue to bear in mind is whether you really know how to do it. If you've done internationalization several times, then you'll know the patterns you need to employ. As such you're more likely to get it right. Adding anticipatory structures is probably better if you're in that position than if you're new to the problem. So my advice would be that if you do know how to do it, you're in a position to judge the costs of doing it now against doing it later. However if you've not done it before, not only are you unable to assess the costs well enough, you're also less likely to do it well. In which case you should add it later. If you do add it then, and find it painful, you'll probably be better off than you would have been had you added it early. Your team is more experienced, you know the domain better, and you understand the requirements better. Often in this position you look back, with 20/20 hindsight, at how easy it would have been to add it early; in fact it may have been much harder to add it earlier than you think.
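
For those who haven't done it before, the usual shape of the solution in Java is to route user-visible text through a locale-keyed lookup rather than hard-coding it. Here is a minimal sketch using the standard ResourceBundle mechanism; the bundle name and key are hypothetical.

    import java.util.Locale;
    import java.util.ResourceBundle;

    // Minimal sketch of the usual internationalization pattern in Java:
    // user-visible strings are looked up from per-locale property files
    // (Messages.properties, Messages_de.properties, ...) on the classpath
    // instead of being hard-coded. Bundle name and key are hypothetical.
    public class Greeter {

        private final ResourceBundle messages;

        public Greeter(Locale locale) {
            messages = ResourceBundle.getBundle("Messages", locale);
        }

        public String greeting() {
            return messages.getString("greeting");
        }
    }

Retrofitting this later means hunting down every hard-coded string in the system, which is exactly why internationalization keeps coming up as the candidate exception to YAGNI.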

This also ties into the question about the ordering of stories. In Planning Extreme Programming, Kent and I openly indicated our disagreement. Kent is in favor of letting business value be the only factor in driving the ordering of the stories. After initial disagreement, Ron Jeffries now agrees with this. I'm still unsure; I believe it is a balance between business value and technical risk. That balance would drive me to provide at least some internationalization early to mitigate the risk - but only if internationalization were needed for the first release. Getting to a release as fast as possible is vitally important; any additional complexity that isn't needed for that first release is worth leaving until afterwards. The power of shipped, running code is enormous. It focuses customer attention, grows credibility, and is a massive source of learning. Do everything you can to bring that date closer. Even if it is more effort to add something after the first release, it is better to release sooner.

With any new technique it's natural that its advocates are unsure of its boundary conditions. Most XPers have been told that evolutionary design is impossible for a certain problem, only to discover that it is indeed possible. That conquering of 'impossible' situations leads to a confidence that all such situations can be overcome. Of course you can't make such a generalization, but until the XP community hits the boundaries and fails, we can never be sure where these boundaries lie, and it's right to try and push beyond the potential boundaries that others may see.

(A recent article by Jim Shore discusses some situations, including internationalization, where potential boundaries turned out not to be barriers after all.)

Is Design Happening?

One of the difficulties of evolutionary design is that it's very hard to tell if design is actually happening. The danger of intermingling design with programming is that programming can happen without design - this is the situation where Evolutionary Design diverges and fails.

If you're in the development team, then you sense whether design is happening by the quality of the code base. If the code base is getting more complex and difficult to work with, there isn't enough design getting done. But sadly this is a subjective viewpoint. We don't have reliable metrics that can give us an objective view on design quality.

If this lack of visibility is hard for technical people, it's far more alarming for non-technical members of a team. If you're a manager or customer how can you tell if the software is well designed? It matters to you because poorly designed software will be more expensive to modify in the future. There's no easy answer to this, but here are a few hints.

  • Listen to the technical people. If they are complaining about the difficulty of making changes, then take such complaints seriously and give them time to fix things.
  • Keep an eye on how much code is being thrown away. A project that does healthy refactoring will be steadily deleting bad code. If nothing's getting deleted then it's almost certainly a sign that there isn't enough refactoring going on - which will lead to design degradation. However, like any metric this can be abused; the opinion of good technical people trumps any metric, despite its subjectivity.

So is Design Dead?

Not by any means, but the nature of design has changed. XP design looks for the following skills:

  • A constant desire to keep code as clear and simple as possible
  • Refactoring skills so you can confidently make improvements whenever you see the need.
  • A good knowledge of patterns: not just the solutions but also appreciating when to use them and how to evolve into them.
  • Designing with an eye to future changes, knowing that decisions taken now will have to be changed in the future.
  • Knowing how to communicate the design to the people who need to understand it, using code, diagrams and above all: conversation.

 

That's a fearsome selection of skills, but then being a good designer has always been tough. XP doesn't really make it any easier, at least not for me. But I think XP does give us a new way to think about effective design because it has made evolutionary design a plausible strategy again. And I'm a big fan of evolution - otherwise who knows what I might be? 

