March 24, 2004
Summary: While Web applications have received the bulk of the interest over the last few years, improvements in the client mean that it's time to re-investigate client-side development. In .NET in the Real World, a new column authored by Microsoft Regional Directors, Billy Hollis looks at smart clients and how you can use them to build applications today. (6 printed pages)
Once upon a time dinosaurs ruled the earth; they were called mainframes. If your hair isn't as gray as mine, you might not remember that world. Let me tell you a bit about how it worked. In that world, the big mainframe system did all the processing. It assembled a page of information and sent that page out to a device called a terminal. The most common type of terminal was called a 3270.
Terminals were rather stupid devices. They had no processing power to speak of. All a user could do with a terminal was fill in data fields and navigate around the screen.
When the user was done with a page of data, he or she pressed a button on the terminal and sent the page's information back to the mainframe. Then the mainframe took in the page's information and processed it, and got ready to send out a new page.
Then PCs came along, and then they got hooked to networks, and finally somebody said, "Hey, we can do better than that mainframe stuff. We can use our PCs' processing power to make the user interface more intelligent and remember lots of things about how the user needs to work. That would help the users do their tasks, and be a lot more productive."
Thus was a new type of system born. The best of these systems did a great job of distributing the work among the users' PCs and the big central machine, which was rechristened a server. These client-server systems allowed both PCs and large central systems to each do what it did best. Client-server displaced many of the mainframes, and people liked it much better. The users were productive and happy.
Then along came the Internet. At first it was just used to browse static, hyperlinked pages. But then some very bright young fellows said, "Hey, look what we can do! We can have an application run on this Internet. We'll have a large central Web server that will do all the processing. And it will send out pages to users, who will view the page in a browser, which will let them navigate around the page and enter some data. Then the users will press a button and send the page's information back to the big system, which will process it and send back a new page. Won't that be great!"
The browser didn't have much intelligence built into it. It was supposed to be a "thin client," meaning the PC wasn't supposed to do anything except run this browser. Thus PCs with more processing power than the mainframes of twenty years earlier were reduced to being pretty analogs of 3270 terminals.
Why did this happen? Why did we take this giant leap backwards? As with so many questions of the "Why is it done that way?" type, the answer is "money." With browser-based systems, we could for the first time allow users all over the world to access our systems, and that was a big step forward. Allowing a new user to access our system had essentially zero cost.
The alternative of a "rich client" or "smart client" required us to install some special software out on the user's system. This used to be easy, back in the days of DOS. We just copied it out there and it ran. But then COM came along—about the same time as the browser, in fact—and we learned the term DLL hell. So we gave up on getting our software to run on the user's machine, and just used the browser, because we couldn't afford to do anything else. After all, something beats nothing. An inferior alternative we can afford beats a superior alternative we can't afford.
But let's shift an underlying assumption. Let's assume deployment of software to a user's system becomes cheaper—perhaps very close to the zero cost of using a browser. What happens then?
Billy's First Law of Software Development states: Users outnumber programmers and tech support reps. Users are the reason our systems exist. If they learn they can get application software interfaces that are intelligent, and help them do their job better, how long do you think they are going to put up with clunky interfaces? How long will they be happy with pages built with a protocol that was not even designed for application interfaces, but for browsing hyperlinked pages? I believe that the answer is, "not very long."
Consider the possibilities. If a system has one thousand operators, and smart client software makes them just five percent more effective, then the cost savings can be huge. Assuming a loaded employee cost of $50,000, a five-percent productivity enhancement for one thousand operators is fifty full-time equivalents, which comes out to $2.5 million. And that doesn't even take into account lower training costs, lower error rates, less user frustration and stress, and other potential by-products of smart client software. (By the way, that five percent assumption above is quite conservative. I've seen productivity improvements in the ten- to twenty-percent range, depending on the task.)
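For the curious, the arithmetic works out like this. The numbers are the illustrative assumptions from the paragraph above, nothing more:

```csharp
using System;

class SavingsEstimate
{
    static void Main()
    {
        // Illustrative assumptions from the text.
        int operators = 1000;
        double loadedCost = 50000.0;      // loaded employee cost per year
        double productivityGain = 0.05;   // conservative 5% improvement

        double fullTimeEquivalents = operators * productivityGain;
        double annualSavings = fullTimeEquivalents * loadedCost;

        Console.WriteLine("FTEs saved: {0}", fullTimeEquivalents);       // 50
        Console.WriteLine("Annual savings: {0:c}", annualSavings);       // $2,500,000.00
    }
}
```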
If this analysis is correct, then how does that affect you decision makers in the world of software development and information technology? I think it leads to an interesting choice: You can be reactive or proactive about adapting smart client systems. And you can pay the price or reap the rewards of the strategy you choose.
On one hand, you can sit back and keep doing things in 1990s fashion, depending heavily on browser-based software. Depending on what industry you are in and how fast your organization moves on new technology, that strategy might not get you in trouble. But in any fast-moving industry, in which technology can confer a competitive advantage, such a strategy carries significant, career-affecting risk. Someday, a person who really calls the shots on technology in your organization may come to you and say, "Hey, this browser-based stuff is junk. The users hate it, and it slows them down. I've seen other systems that have smart user interfaces that let them do their jobs better and faster, and save their companies tons of money. I want some of that. Get it for me—and we need it yesterday." Or worse, they might start the conversation with, "We've decided to make a change in the management of your department..."
If this doesn't sound very attractive, you can adopt a proactive strategy. You can start by deciding whether that new application you need to write for 500 users should have an old-fashioned browser interface, or a hot, sexy, new intelligent user interface. You can be the hero that creates software that improves the productivity of hundreds or thousands of users, maybe saving millions of dollars. You can have your users go, "Wow! I didn't know you could have Internet applications this nice! You're a genius!"
If you like the "genius" choice, then you need to learn the ins and outs of smart client development. There are four major pieces of technology needed to make a smart client architecture work:

- A rich, responsive forms engine on the client
- A practical way to move data between the client and the server
- Low-cost deployment of the client software
- A security model that makes distributed code safe to run
Right now, the Microsoft® .NET Framework is the clear choice of platform to attain all of these capabilities. It is not the only choice as a platform for distributed smart client systems, but it obviously has a large lead over alternatives such as Java. Let's look at the capabilities of .NET in each of the areas above to see why.
Forms on the Client
The .NET Framework includes one of the most advanced forms engines available. It's called Microsoft® Windows® Forms. It's completely object oriented, has a wide variety of visual controls (both from Microsoft and third parties) available for it, and the Microsoft® Visual Studio® development environment includes a great visual designer to quickly create Windows Forms interfaces.
Using the event-driven paradigm of Windows Forms, plus the ability to store as much state information locally as necessary, user interfaces can be much more responsive. Heads-down data entry tasks can be done with interfaces specially designed for that purpose. Sophisticated data validation can help users get the data right the first time, instead of needing to reload pages just to see if the data is right. (Yes, yes, I know. Browsers can do some of that too. But it takes so much programming to make them do it right that the effort is seldom worth it.)
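Here's a minimal sketch of that kind of immediate, client-side validation, assuming .NET 1.x Windows Forms. The form and field names are purely illustrative:

```csharp
using System;
using System.ComponentModel;
using System.Windows.Forms;

public class OrderForm : Form
{
    private TextBox quantityBox = new TextBox();
    private ErrorProvider errors = new ErrorProvider();

    public OrderForm()
    {
        quantityBox.Location = new System.Drawing.Point(8, 8);
        // Validate as the user leaves the field—no server round trip needed.
        quantityBox.Validating += new CancelEventHandler(QuantityValidating);
        Controls.Add(quantityBox);
    }

    private void QuantityValidating(object sender, CancelEventArgs e)
    {
        int quantity;
        try
        {
            quantity = int.Parse(quantityBox.Text);
        }
        catch (FormatException)
        {
            quantity = -1;
        }

        if (quantity < 1)
        {
            // Show a blinking error icon right next to the offending field.
            errors.SetError(quantityBox, "Enter a whole number of 1 or more.");
            e.Cancel = true;   // keep focus on the field until it's corrected
        }
        else
        {
            errors.SetError(quantityBox, "");
        }
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new OrderForm());
    }
}
```

The user gets feedback the instant the field loses focus, instead of after a page reload—exactly the kind of responsiveness that's impractical to reproduce in a browser.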
Windows Forms interfaces can include tutorial windows, tooltips, translucent help forms, dynamic controls that adapt their behavior to the user, and much more. And they are still typically faster to develop than equivalent Web pages.
Microsoft is investing heavily in this area of technology. The next generation of UI technology has already been announced. The code name for the project is "Avalon," and it's part of the next release of Microsoft? Windows?, code-named "Longhorn," currently under development. Avalon adds even more capabilities to make user interfaces responsive, including some new paradigms for user interaction that would be virtually impossible in a browser.
To take advantage of these technologies, developers will need to learn more about user interface design principles. If they've only done browser-based programming, they may not realize how much there is to learn about UI design. They can't make your systems include responsive, intelligent interfaces, until they know how to write responsive, intelligent interfaces.
There is one limitation to note. Windows Forms is a part of the .NET Framework, and is therefore available only on Windows platforms. However, if your organization has standardized on Microsoft® Windows® for client workstations, as many have, then this is not a concern for you.
Applications Need Data
The vast majority of corporate and commercial applications manipulate data, which is usually stored in some central server. For a smart client application to be viable, it must be practical to get the data from the server to the client, allow the client to make changes to the data, and then send the data back again.
Microsoft's newest data access technology, Microsoft® ADO.NET, is designed for just such a scenario. Unlike previous data access models, it was created for distributed use. XML-based containers of data can be created on the server and transported to the client. The client can work with the data in the container without maintaining a connection to the server, making changes and additions, and then send the container back to the server.
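A minimal sketch of that disconnected pattern, assuming ADO.NET 1.x; the Customers table, queries, and connection string are illustrative:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class DisconnectedData
{
    // Illustrative connection string—not a real server.
    const string ConnectionString =
        "Server=ORDERS01;Database=Sales;Integrated Security=SSPI";

    // Server side: build a disconnected container of data.
    static DataSet GetCustomers()
    {
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT CustomerID, CompanyName FROM Customers",
            ConnectionString);
        DataSet data = new DataSet("CustomerData");
        adapter.Fill(data, "Customers");  // connection opens and closes here
        return data;                      // serializes to XML for transport
    }

    // Client side: edit the data with no connection to the server at all.
    static DataSet EditOffline(DataSet data)
    {
        DataRow row = data.Tables["Customers"].Rows[0];
        row["CompanyName"] = "Contoso, Ltd.";
        // Only the changed rows need to travel back to the server.
        return data.GetChanges();
    }

    // Back on the server: apply the client's changes to the database.
    static void SaveChanges(DataSet changes)
    {
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT CustomerID, CompanyName FROM Customers",
            ConnectionString);
        // The command builder generates the INSERT/UPDATE/DELETE commands.
        SqlCommandBuilder builder = new SqlCommandBuilder(adapter);
        adapter.Update(changes, "Customers");
    }
}
```

Notice that no database connection is held open while the user works; the DataSet is the "container" the article describes, and GetChanges keeps the return trip small.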
There are two primary technologies that can be used to do the actual transport of data between client and server machines. One is Web services. The advantages of Web services include ease of implementation and configuration, and compatibility with many kinds of servers. Web services can even talk to non-Microsoft servers, though you'll have to do a bit more work to create an appropriate data container.
If all the systems involved in your application can run Microsoft .NET, then another alternative is called .NET remoting. This has some performance and security advantages, but is more difficult to configure. Large-scale smart client systems for internal users often depend on remoting, while systems with users outside the organization are more likely to use Web services.
In either case, it's quite feasible for a smart client application to handle data much more intelligently than a browser-based system could hope to do. For example, if the smart client loses its link to the server, it can maintain a local copy of the data until the server is available again. The smart client application can dynamically adapt its behavior for online and offline scenarios.
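That online/offline fallback can be sketched like this, assuming a hypothetical Web service proxy (the kind wsdl.exe generates) hidden behind an interface, and an illustrative cache path:

```csharp
using System;
using System.Data;
using System.Net;

// Stands in for a wsdl.exe-generated Web service proxy (hypothetical).
public interface ICustomerSource
{
    DataSet GetCustomers();
}

public class OfflineAwareClient
{
    // Illustrative local cache location.
    const string CachePath = @"C:\AppData\CustomerCache.xml";

    public static DataSet LoadCustomers(ICustomerSource proxy)
    {
        DataSet data = new DataSet();
        try
        {
            // Online: get fresh data and refresh the local cache.
            data = proxy.GetCustomers();
            data.WriteXml(CachePath, XmlWriteMode.WriteSchema);
        }
        catch (WebException)
        {
            // Offline: fall back to the last data saved locally.
            data.ReadXml(CachePath, XmlReadMode.ReadSchema);
        }
        return data;
    }
}
```

Because the DataSet serializes itself to XML, caching it locally is a two-line job—a browser-based application has no comparable place to put the data.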
Low Cost Deployment
Remember that we started this discussion by asking about the effect of making deployment of smart client systems as cheap or nearly as cheap as browser-based systems. Now, it's hard to get much cheaper in deployment costs than a browser-based application, which has essentially zero deployment costs for each additional user. But it's possible to reduce smart client deployment costs enough to be competitive. And remember, with the potential large savings of a smart client through productivity improvements, taking on some additional deployment cost can make good economic sense.
The key technology allowing cheap deployment is the copy-and-run capability of the .NET Framework. There's no need to perform complex registration of system components as in COM. Just copy the components onto the disk and the Framework does the rest. And with side-by-side execution of multiple DLL versions, DLL incompatibilities are banished.
A limited form of automatic Internet deployment is already included in the .NET Framework, and a more advanced version, code-named "ClickOnce," is slated for the next version. In the meantime, I've found it easy to create custom deployment systems that take advantage of copy-and-run to easily and cheaply produce self-updating applications.
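The kind of custom self-updating launcher I mean can be sketched like this. The server URL, file paths, and the split between a small launcher and the main application executable are all illustrative assumptions:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Net;
using System.Reflection;

class Launcher
{
    // Illustrative locations—substitute your own.
    const string ServerExe = "http://apps.example.com/OrderEntry/OrderEntry.exe";
    const string LocalExe  = @"C:\Apps\OrderEntry\OrderEntry.exe";

    static void Main()
    {
        // Download the server's copy alongside the installed one.
        string freshCopy = LocalExe + ".new";
        WebClient client = new WebClient();
        client.DownloadFile(ServerExe, freshCopy);

        // No registration step: comparing assembly versions and copying
        // the file is the entire "installation."
        Version current = AssemblyName.GetAssemblyName(LocalExe).Version;
        Version latest  = AssemblyName.GetAssemblyName(freshCopy).Version;
        if (latest.CompareTo(current) > 0)
        {
            File.Copy(freshCopy, LocalExe, true);
        }

        // Run the (possibly just-updated) application.
        Process.Start(LocalExe);
    }
}
```

Because the launcher is a separate executable, it can overwrite the application freely; the user just runs the launcher and always gets the current version.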
Just as browsers must be installed on the client systems to allow browser-based applications to run, the .NET Framework must be installed on client machines for .NET-based smart clients to run. At present, this does mean taking extra responsibility, because it's not automatically installed for the operating systems (Windows XP, Windows 2000, and Windows 98/Me) used by most client machines. However, installation of the .NET Framework is free for these systems. Over time, as systems turn over, we'll see the .NET Framework become ubiquitous because Microsoft intends to include it with future operating system products. In fact, they've already started by including it in Microsoft® Windows Server® 2003.
Making It All Secure
We've learned in the past few years just how malicious some people on the Internet can be, and we've responded by building more robust security into our systems. But it's clear that new security capabilities are needed for a distributed smart client system.
Fortunately, the design of the .NET Framework includes robust security principles. Besides eliminating vulnerabilities such as buffer overruns, there is a new form of security called code access security. It awards security privileges to pieces of code, based on information about the code, such as where it came from or who wrote it. This security sits on top of the normal user-based security, allowing another layer of protection.
I would argue that a properly designed smart client system is more secure than a typical browser-based system, because the interface between the client and server systems can be more carefully controlled, and because the executable parts of the system can receive only the privileges they need. But it's true that designing such advanced security means learning some new techniques and concepts.
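For example, code access security lets a piece of code ask whether the code itself—not just the logged-in user—has been granted a right. A small sketch, with an illustrative file path:

```csharp
using System;
using System.Security;
using System.Security.Permissions;

class CodeAccessDemo
{
    static void Main()
    {
        // The Framework checks this demand against what security policy
        // granted to this code, based on evidence about the code (where it
        // came from, who signed it)—regardless of how powerful the
        // logged-in user happens to be.
        FileIOPermission readLog = new FileIOPermission(
            FileIOPermissionAccess.Read, @"C:\Logs\orders.log");
        try
        {
            readLog.Demand();   // throws if this code wasn't granted the right
            Console.WriteLine("This code is allowed to read the log file.");
        }
        catch (SecurityException)
        {
            Console.WriteLine("Policy denies this code access to the file.");
        }
    }
}
```

This is the extra layer the article describes: even running under an administrator account, code downloaded from an untrusted location can be confined to only the privileges policy grants it.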
Your Action Plan
You don't have to rush into this smart client world. Any time in the next two years or so will probably be fine—as long as your competitors don't get there first, of course. The users aren't clued in yet. And deployment, while economically reasonable in cost, still requires you to take responsibility for getting the .NET Framework on the client machines.
But these are temporary conditions. There is no doubt in my mind that smart client applications will displace a lot of browser-based applications over the next few years. The reason I have no doubt is that I've already had five clients decide to do it, and they are all thrilled with the results.
That doesn't mean the shift was painless for them. It required developers to learn new distributed architectures and new technologies. In many cases, they had to get better at object-oriented development and user-interface design.
Developers who have invested the last five years in learning browser-based development may not want to change. They are accustomed to being the leading edge developers of their day. But all technologies peak, and then decline, and are replaced by newer technologies. While we will continue to see browser-based applications used in certain scenarios for many years, I believe browser-based development has now passed its peak. The decline from the peak may take a while, but the direction is clear. Get ready for the return of the smart client!