Capacity Planning - Duwamish Online Sample E-Commerce Site
Published: April 1, 2000
By Paul Johns
Based on Duwamish Online Sample E-Commerce Site

Microsoft Enterprise Services White Paper
E-Commerce Technical Readiness

Note: This white paper is one of a series of papers about applying Microsoft® Enterprise Services frameworks to e-commerce

solutions. E-Commerce White Paper Series (http://www.microsoft.com/technet/itsolutions/ecommerce/plan/ecseries.mspx) contains

a complete list, including descriptions, of all the articles in this series.

Credits

Program Managers: Raj Nath, Mukesh Agarwal
Reviewer: Shantanu Sarkar
Other Contributors: Jyothi C M, Laura Hargrave

 

On This Page
 Introduction
 Capacity Planning Considerations
 Microsoft® Windows® DNA Architecture
 Web Servers
 The capacity planning process
 Defining requirements
 Testing
 Results, Analysis, and Configuration Selection
 Conclusion
 Appendix: WAST Best Practices

Introduction
The Capacity Planning white paper is one of a series about Microsoft® Enterprise Services frameworks. For a complete list of

these publications, please see http://www.microsoft.com/services/microsoftservices/cons.mspx.

This white paper addresses key issues in the capacity management function of service management as it specifically applies to

a business-to-consumer e-commerce service solution. Anyone reading this paper should already have read the Microsoft

Operations Framework Executive Overview white paper, which contains important background information for this topic. The

following section provides a brief overview of this information.

Microsoft® Operations Framework Overview
Delivering the high levels of reliability and availability required of business-to-consumer Web sites requires not only great

technology but also great operational processes. Microsoft has built on industry experience and best practice to create the

knowledge base required to set up and run such processes. This white paper is part of this knowledge base, which is

encapsulated in Microsoft Operations Framework (MOF), which is based on two important concepts: service solutions and IT

service management.

Service solutions are the capabilities, or business functions, that IT provides to its customers. Some examples of service

solutions are:

• Application hosting
 
• E-commerce
 
• Messaging
 
• Knowledge management
 

With recent trends in application hosting and outsourcing, the guidance that MOF provides strongly supports the concept of

providing software as a service solution.

IT service management consists of the functions customers need to maintain a given service solution. Examples of IT service

management functions include:

• Help desk
 
• Problem management
 
• Contingency planning
 

MOF embraces the concept of IT operations providing business-focused service solutions through the use of well-defined

service management functions (SMFs). These SMFs provide consistent policies, procedures, standards, and best practices that

can be applied across the entire suite of service solutions found in today's IT environments. The MOF model positions all the

SMFs in a life cycle model shown below:

 



More detail on the MOF process model can be found at http://www.microsoft.com/services/microsoftservices/cons.mspx

MOF and ITIL
MOF recognizes that current industry best practice for IT service management has been well documented within the Central

Computer and Telecommunications Agency's (CCTA) IT Infrastructure Library (ITIL). The CCTA is a United Kingdom government

executive agency chartered with development of best practice advice and guidance on the use of information technology in

service management and operations. To accomplish this, the CCTA charters projects with leading information technology

companies from around the world to document and validate best practices in the disciplines of IT service management. MOF

combines these collaborative industry standards with specific guidelines for using Microsoft products and technologies. MOF

also extends the ITIL code of practice to support distributed IT environments and current industry trends such as application hosting and Web-based transactional and e-commerce systems.

Capacity Planning Considerations
If you're planning to implement an e-commerce or enterprise application, you'll need to familiarize yourself with capacity

planning to ensure that your application system will perform acceptably well. Slow Web servers and server crashes can

encourage your customers to try your competitors' Web sites—and maybe never come back.

Enterprise and e-commerce applications are similar in many ways, but there's one important difference.

In most enterprise applications, growth can be anticipated and planned for because the limiting factor in application demand

is the number of employees who will access your system. (You know how many employees you have and how fast you're growing.

And, unless you merge with another company, you're not going to double in size overnight.)

E-commerce apps are different. The limiting factor is the number of customers who want to access your Web site at the same

time. That number is bounded by the number of Internet users in the world, but it's very hard to predict how many will visit your site

at any given time. Worse yet, that number can change very quickly—an ad campaign, new product, search engine listing, or

favorable newspaper story can cause your site to become much more popular almost instantaneously—doubling (or more)

overnight is not uncommon at all.

So, capacity planning is about planning the hardware and software aspects of your application so that you'll have sufficient

capacity to meet anticipated and unanticipated demand. Duwamish Online is a good example of a Microsoft® Windows® DNA

application, so we'll take a look at it and tell you how we performed the capacity planning for it.

Microsoft® Windows® DNA Architecture
In order to understand capacity planning for Windows DNA applications, such as Duwamish Online, you'll need to understand

what a Windows DNA application looks like. The following diagram shows the Duwamish Online hardware configuration.

 

Figure 1: Duwamish Online hardware configuration


Windows DNA applications, including Duwamish Online, have multiple logical tiers, and almost always several physical tiers.

You can read more about Windows DNA applications in A Blueprint for Building Web Sites Using the Microsoft Windows DNA

Platform at:

http://msdn.microsoft.com/library/en-us/dndna/html/dnablueprint.asp

The client tier is generally your customer's Web browser connected to your application servers through the Internet. The

client tier isn't shown in the diagram, but is on the other side of the firewall shown above.

On our side of the Internet, Duwamish Online has four identical Web servers, connected by Microsoft® Windows® 2000 Network Load Balancing, that make up the Web tier. Most of the logical middle tier components, such as workflow and business

logic layers, run on the Web servers. This is more efficient because the need for communications over the network is

eliminated. These Web servers are protected by the firewall shown above.

In addition to the network connecting the Web servers to the Internet, the Web servers are also connected through a private

LAN to the database tier and other specialized servers. These include the queued components server (for credit card

authorizations and fulfillment), the box that runs the primary domain controller (PDC), and the DNS server. In Duwamish Online,

we use a fail-over cluster of two servers connected to a common RAID disk array as our database tier. An administrative and

monitoring network is connected to all of the machines. This means that the Web servers are connected to three networks using

three network adapters.

For a more complete synopsis, see the entries in the Duwamish Online Diary at: http://msdn.microsoft.com/voices/sampleapp.asp

Any of these pieces, or their components, can be a bottleneck, so you'll need to establish ways to monitor their performance

and load—that's why we have a second network and server for administering and monitoring Duwamish Online. Microsoft®

Windows® 2000 provides performance counters that allow you to monitor most of the important bottlenecks—some of the most

important are listed below.

Web Servers
CPU
Web applications tend to be processor-bound. Contention issues, caused by more than one thread trying to execute the same

critical section or access the same resource, can cause frequent expensive context switches, and keep the CPU busy even

though the throughput is low. It is also possible to have low CPU utilization with low throughput if most threads are

blocked, such as when waiting for the database.

There are two basic ways to get the processing power you need. You can add additional processors to each server, or you can

add more servers.

Adding processors to an existing server is often less expensive (and less troublesome) than adding additional servers. But

for most applications, there comes a point when adding additional processors doesn't help. In addition, the operating system supports only a limited number of processors.

Adding servers allows you to scale linearly to as large a Web farm as you need. (Linear scaling means that two servers handle

double the load of one, three servers handle three times the load, ten servers handle ten times the load, and so on.)

Several dual and quad-processor systems were tested for Duwamish Online to determine the most effective machine. Originally,

adding two additional processors didn't help performance at all due to thread contention issues in the Duwamish Online

application. Reconfiguration of the components reduced that contention enough that adding the third and fourth processors gave about a 30% performance increase; not great, but better performance can mean fewer servers to manage, a definite

advantage. We're looking into the problems that prevent better usage of the additional processors and will publish the

results at a later date.

Memory
Because Duwamish Online is a relatively simple application and its catalog is relatively small, insufficient memory has not yet been a problem.

Remember that RAM access (at about 10ns) is about a MILLION times faster than disk access (about 10ms), so every time you

have to swap a page into memory, you'll REALLY slow down the application. Adding sufficient RAM is the best and least

expensive way to get good performance from any system.

You can make sure your application has enough memory by checking the paging counters (paging should be rare once the app is

running) and the working set size, which should be significantly smaller than the available RAM in Windows 2000.
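
As an illustration only (not part of the Duwamish Online tooling), a minimal sketch that samples those two counters with the Windows typeperf utility might look like the following; the process name inetinfo is an assumed stand-in for the IIS worker process.

```python
# Hypothetical sketch: sample paging and working-set counters via typeperf.
# Assumes a Windows host with the typeperf utility; "inetinfo" is an assumed
# example process name (the IIS process), not taken from this white paper.
import subprocess

COUNTERS = [
    r"\Memory\Pages/sec",               # should stay near zero once the app is warm
    r"\Process(inetinfo)\Working Set",  # should be well below physical RAM
]

def sample_counters(samples=5, interval=2):
    """Collect a few samples and print them as CSV rows."""
    cmd = ["typeperf"] + COUNTERS + ["-sc", str(samples), "-si", str(interval)]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(out.stdout)

if __name__ == "__main__":
    sample_counters()
```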

Network
There are a number of potential bottlenecks that can occur in your networking hardware.

First, your connection to the Internet might be a bottleneck if it's not fast enough for all the bits you're sending. If your

application becomes very popular, you may need to obtain a higher-speed connection or redundant connections. Redundant

connections also help your reliability and availability.

You can reduce your bandwidth requirements to prevent bottlenecks by reducing the amount of data you send, especially

graphics, sound, and video. Your firewall can also become a bottleneck if it's not fast enough to handle the traffic you're

asking it to handle.

Note that you can't run an Ethernet network at anywhere near its theoretical capacity because you'll create many collisions

(two senders trying to transmit at the same time). When a collision happens, both senders must wait a random amount of time

before resending. Some collisions are inevitable, but they increase rapidly as your network becomes saturated, leaving you

with almost no effective bandwidth.

You can reduce collisions a great deal by using switches rather than hubs to interconnect your network. A switch connects two

ports directly rather than broadcasting the traffic to all ports so that multiple pairs of ports can communicate without

collisions. Given that the prices of switches have significantly decreased in the last few years, it's usually a good idea to

use a switch rather than a hub.

During one test of Duwamish Online, we got some very odd performance numbers—the database was not working very hard and the

Web servers were very busy and very slow. Upon further investigation, we noticed that we'd used a 100Mbps hub to connect the

Web servers with the database server. Because all of the traffic—incoming, outgoing, and inter-server—was going through one

hub, it became swamped, thereby blocking the system from processing transactions quickly. Removing the bottleneck by using a

switch allowed us to test (and scale) successfully.

Database server and disk
The last potential bottleneck—the database—can be the hardest to fix. Other bottleneck fixes are relatively obvious; if the

Web servers are identical, then buy another one; if you need more bandwidth, then get a faster connection or an additional

connection or redundant and/or faster networking hardware. But for read/write real-time data you have to have exactly one

copy of the data, so increasing database capacity is much trickier. Sometimes the bottlenecks will be in the database server,

sometimes they'll be in the disk array.

To some degree, you can increase database server capacity by segmenting your data. In Duwamish Online, database server

capacity has never been an issue (we're running on a relatively small dual-processor machine, but still using only about 25%

of CPU capacity even when all four dual-processor Web servers are running at 100% CPU utilization). So all the data—catalog,

inventory, customer records, and order information—is put onto the same database server.

If database server capacity becomes an issue, there are a number of things you can do. If CPU capacity is the issue, then add

additional CPUs. Microsoft® SQL Server™ makes good use of additional processors. If the disk is the bottleneck, then use a

faster disk array. More RAM would likely help, too, because SQL Server has some very sophisticated caching.

Another option is to split the database across multiple servers. The first step is to put the catalog database on a server or

set of servers. Because the catalog is usually read-only, it's safe to replicate the data. You can also split off read-mostly

data, such as customer information. But if you need multiple copies, replicating the information properly is harder.

But it's possible that you could get so busy that the read/write data has to be segmented. This is relatively simple for most

applications; you can segment based on zip code, name, or customer ID. But it takes application programming in the database

access layer to make this work. The layer has to know the server to go to for each record.
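
Purely as an illustration of that idea (this is not how Duwamish Online was built, and the server names and hashing scheme below are hypothetical), the routing decision in such a data access layer might look like this:

```python
# Hypothetical sketch: routing records to a database server by customer ID.
# The server names and the hash/modulo scheme are illustrative assumptions.
import hashlib

SHARD_SERVERS = ["dbserver1", "dbserver2", "dbserver3"]

def server_for_customer(customer_id: str) -> str:
    """Pick the one server that owns this customer's read/write data."""
    digest = hashlib.md5(customer_id.encode("utf-8")).hexdigest()
    shard = int(digest, 16) % len(SHARD_SERVERS)
    return SHARD_SERVERS[shard]

# The data access layer would open its connection against this server;
# the same ID always maps to the same server.
print(server_for_customer("CUST-000123"))
```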

SQL Server 2000 makes this easy—with no application programming, SQL Server 2000 supports splitting a table across multiple

machines. This works very well, and gives linear scalability up to the maximum cluster size. In fact, SQL Server 2000

currently is the world's fastest TPC-C system: about 227,000 transactions per minute on a cluster of 12 machines with eight

processors each—67% faster than the previous record. That's estimated to be 575 times the combined transaction volume of

Amazon.com and eBay.

So, although it's harder and more expensive to scale database servers, it is possible to scale them as large as you're likely

to need.

Theoretical model vs. empirical testing

It's possible to develop a theoretical model for the cost of transactions. The TCA model (which is described in another

document) allows you to estimate the cost of each transaction in terms of processor cycles. You can then develop a model with

your projected mix of transactions to determine how many cycles the application as a whole uses.

Models such as this should be able to help you predict how many and what kind of machines to buy. But since these models are

based on empirical testing of the actual application on actual hardware, you can't avoid performance testing by using such

models.

This raises the question of how to stress test Web applications that you've not yet deployed. There are several products

available that run test scripts against your Web application, using a relatively small number of machines to simulate a large

number of clients. The Web Application Stress Tool (WAST) is one of these; we'll discuss it later on.

Duwamish Online decided not to build a theoretical model for two reasons. First, we were going to have to test the actual

application on the actual hardware configuration (rather than a pilot application farm) anyway, so there wasn't any savings

in time. In fact, the analysis would have taken much longer than our somewhat ad hoc methods. Second, most of the existing

models aren't comprehensive enough to explain all of the behavior in a system. For instance, many models can't predict the

bottleneck caused by contention for shared resources or network collisions, nor can they predict contention problems caused

by a mix of transactions. Some models even assume that as you add additional processors to a server, you'll get linear

scaling. This is almost NEVER true for any application running on a symmetric multiprocessing operating system, such as

Windows 2000.

You can use the theoretical model to make your first guesses about what hardware to get and how to configure it, but you'll

still have to do full-scale testing on your deployment farm to confirm that it performs to your requirements.

But the theoretical models can still be useful. First, they're very useful for predicting what will happen to the performance

of an existing Web site if you change the application or add additional users. And they can be useful for running tests on a

relatively small application farm to extrapolate the results so that you make sure to purchase enough hardware. Because

there's sometimes a long lead-time for purchasing sophisticated hardware, a theoretical model can be very useful indeed. This

is especially true where the scalability issues are well known, such as the linear scalability of buying more Web servers.

Theoretical models are less useful for predicting how many processors or how much memory each box should have because the

factors that affect performance are difficult to model—they're complicated and often not well understood.

The capacity planning process
You can think of the capacity planning process in five steps, with iteration. During each iteration, you can test and

identify performance bottlenecks and fix them. Performance bottlenecks can be in software, hardware, or in the way components

are configured—or any combination of the three.

 

Figure 2: Capacity Planning Process


Defining requirements
The first step in your capacity planning process is to define the requirements for the application. For Duwamish Online, we

used some relatively "seat-of-the-pants" methods for defining our capacity requirements. If you have access to statistics

from an existing site, solid marketing research including competitive analysis, sales goals, and projections on average order

size, you should use these to define capacity requirements. And you should certainly update your assumptions as users

actually use your site after you deploy it.

The Duwamish Online site simulates the sale of consumer goods, such as books, t-shirts, and coffee mugs. This simulation is

an accurate one: it verifies and processes (but does not charge) credit cards, and sends orders to the company that does

fulfillment (but these orders are ignored). To encourage people to complete an order, customers who do so are entered into a contest.

Page hits
We really had no clue about how many page hits to expect, so we took a wild guess: since the entire msdn.microsoft.com

cluster receives an average of a bit less than two million hits a day, we assumed that Duwamish Online wouldn't get any more

than that. (We'll probably get far fewer.) So our requirements specify that the application should be able to handle two

million page hits per day. (To see a different version of capacity planning, check out the way Duwamish Online did it at

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnduwam/html/d4perfgoal.asp.)

Two million page hits per day is an average of 23.14 pages per second:

(2,000,000 pages/day) / (24 hr/day * 60 min/hr * 60 sec/min) = 23.14 pages/sec. However, we know that the page views don't all come at the same

time. There are peaks and valleys in demand. Because the Duwamish Online team does not know a good model for predicting the

sizes of the peaks and valleys, it fell back on an old rule of thumb: the 80/20 rule—at least until it has real numbers. The

80/20 rule guesses that 80% of the hits will be received in 20% of the time.

Using the 80/20 rule, peak usage will be:

(0.8 * (2,000,000))/(24 * 60 * 60 * 0.2) = 92.59 pages/sec.
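
The same arithmetic, written as a small Python sketch you can adapt for your own traffic assumptions (the figures are the Duwamish Online numbers quoted above):

```python
# Average and peak page rates for a daily page-hit target, using the 80/20 rule.
daily_page_hits = 2_000_000
seconds_per_day = 24 * 60 * 60

average_rate = daily_page_hits / seconds_per_day               # about 23 pages/sec

# 80/20 rule: 80% of the hits arrive in 20% of the day.
peak_rate = (0.8 * daily_page_hits) / (seconds_per_day * 0.2)  # about 92.6 pages/sec

print(f"average: {average_rate:.2f} pages/sec, peak: {peak_rate:.2f} pages/sec")
```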

Concurrent users/concurrent connections
You will also need to consider the number of concurrent connections that can be handled. The production farm monitors this along with the number of page hits, which helps in understanding visitors' behavior.

On the other hand, an extremely high number of concurrent connections may negatively impact server performance. Most load

generation tools, such as WAST, cannot generate the exact concurrent connection pattern the real-world operation will

experience. The Duwamish Online team will closely monitor this and compare the production numbers with the lab results.

Concurrent connections are measured on the server side. The concurrent user load, however, is determined on the client side.

Since the concurrent connection count tends to be very dynamic and fluctuates a lot, people usually use the concurrent user load (the simulated load the stress tool generates) as an index of system load. Note that the team found you don't really need to know anything about "concurrent users" or "concurrent connections" for pre-deployment analysis. Determining the number of

pages per second will allow you to perform the analysis.

The Duwamish Online team has approximated an average concurrent load of 1000 users, with peak concurrent load of 5000 users.

A quick check shows that these numbers are in the right ballpark: if each user browses five pages, there would be 400,000

unique users each day (viewing 2,000,000 pages total). If the team handles 1000 users concurrently, that would mean we'd have

400 groups of 1000, each browsing an average of about 3.6 minutes. This is calculated by dividing 24 hours by 400,000 users and then multiplying by 1000 concurrent users.
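
That ballpark check, written out with the figures above:

```python
# Ballpark check on the concurrent-user estimate.
daily_page_hits = 2_000_000
pages_per_visit = 5
concurrent_users = 1_000

unique_users_per_day = daily_page_hits / pages_per_visit   # 400,000 visits/day
groups = unique_users_per_day / concurrent_users           # 400 groups of 1,000

minutes_per_day = 24 * 60
minutes_per_group = minutes_per_day / groups               # about 3.6 minutes per visit

print(f"{unique_users_per_day:.0f} visits/day in {groups:.0f} groups, "
      f"about {minutes_per_group:.1f} minutes each")
```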

Mix of usage
For most sites, actual completed orders constitute one or two percent of visitors. However, because you don't have to pay

with Duwamish Online and because the team only enters you for a prize if you complete an order, it's assumed that 30% of

visitors will complete an order. Note that this high percentage of placing orders means that the database will be used more.

Response time
This is a very important requirement: if users don't get good response, they'll shop elsewhere. The industry standard for

response time is a maximum of three seconds, so we used this as our requirement at normal load, relaxing it to five seconds under peak load.

Don't overstress the system: stop before contention makes capacity growth non-linear
Computer systems, especially large ones, are complicated—so it's sometimes hard to predict what will happen to a system as

its load increases. The fact that various processes are contending for the same resources isn't a big deal when the load is

light, but can become a huge bottleneck as the load increases.

A consistent theme that the Duwamish Online team noticed is that the system would respond well as load was increased only to

a point, then other issues would keep the performance from improving (and might actually make performance decline).

Therefore, it's important to note the loads at which the performance suffers, and then to be very sure not to approach or

exceed that load.

CPU

If you graph CPU utilization against response time, you'll see an interesting non-linear relationship: as the load increases

and CPU utilization passes a certain point, the response time starts to grow exponentially. Note that the point varies from

application to application, so you have to test empirically to determine this point.

 

Figure 3: Exponential growth in response time as CPU utilization rises


The reason for this exponential growth is typically that the threads are competing for a scarce resource or a commonly used

critical section, causing a lot of context switching, which is a relatively expensive operation. In addition, all ASP worker

threads might be busy so that new incoming requests are waiting in the ASP queue for their turn to be processed.

Clearly, you do not want to be anywhere near the point (about 70% of maximum CPU utilization for Duwamish Online Web servers)

where the response time grows rapidly.

In order to avoid resource and critical section contention issues and to allow extra capacity for peak times, the team

decided to keep the CPU utilization around 20% when running at average load, and 50% at peak load.

Network

As mentioned previously, networks can't be run anywhere near their rated capacity because collisions frequently occur and

significantly reduce throughput. You'll want to make sure that you're not using too much network capacity.

Testing
Now that the importance of testing your configuration has been established, the following section discusses how Duwamish Online was tested.

How we tested Duwamish Online
To test Duwamish Online, we set up an application farm running Duwamish Online and used the Web Application Stress Tool (WAST) on several client machines. These machines are connected to the Duwamish Online application farm where the firewall

would be. (The tests below do not actually use the firewall because it's implemented by the ISP and we were testing in a lab

instead of over the Internet. It would, of course, be more realistic to test with a firewall.)

Web Application Stress Tool (WAST)
WAST is a Web stress tool that is designed to simulate multiple browsers requesting pages from a Web application. It can be

used to generate customizable loads on various Internet services, and offers a rich set of features for gathering performance

data for a particular Web site. You can try to reproduce the test results: just download WAST for free at:

http://www.microsoft.com/technet/archive/itsolutions/intranet/downloads/webstres.mspx

There are other commercial tools available, such as RadView's WebLoad and RSW's e-load.

Usage scenarios
WAST uses one or more "scripts" to simulate a user that is browsing. The team used two scripts that varied primarily in the

number of users who placed orders—one script with a normal 3%, and one with a huge 30% of users placing orders (as described

previously). The usage scenarios are in the table below.

Key: A
Scenario: 59% Category Page, 18% Item Detail, 11% Keyword Search, 9% Home Page, 3% Shopping
Think Time: Varies from 0 to 5.5 seconds
Bandwidth Throttling: 128K ISDN connection

Key: B
Scenario: 30% Category Page, 20% Item Detail, 11% Keyword Search, 9% Home Page, 30% Shopping
Think Time: Varies from 0 to 5.5 seconds
Bandwidth Throttling: 128K ISDN connection

"Think time" is a random amount of time between the completion of one request and the submission of the next. "Bandwidth

throttling" is simulating the slower connections that many users have. We chose 128Kbps ISDN as a compromise between analog

modems (at 28.8 and 56.6 Kbps) and broadband connections.

Configurations
We tested Duwamish Online on several different application farm configurations using two different types of computers—

inexpensive single/dual-processor workstations we call "little bricks," and more expensive two-to-four-processor systems we call "big bricks."

We also tested hosting the middle-tier components on different servers: on the database server and on the Web servers. For the

Duwamish Online application, performance was best with the components on the Web servers, so that configuration was used for

the rest of the testing and implementation.

The big bricks and little bricks configurations were as follows:

Little Bricks:
• P3 550 Xeon, single processor ($3200)
• 256 MB RAM
• 100 Mbit NIC
• SCSI 19 GB
• Windows 2000 Advanced Server
• SQL 7.0 on the database server

Big Bricks:
• 2-4 processors: 550 MHz Xeon (about $16K)
• 512 MB RAM
• Dual 100 Mbit NICs
• SCSI 20 GB
• Windows 2000 Advanced Server
• SQL 7.0 on the database server

Running the tests
Once the hardware is configured and the software is installed, you can begin running the tests. Let it run for twenty minutes

and then take measurements. This allows the various caches (memory, disk, IIS, SQL, and Duwamish Online) to reach a reasonably stable state. If you don't do this, measured performance will be unrealistically slow because the hit rates on the caches are low while they are being loaded.

After the warm-up period, take data for a couple of minutes, then increase the load, wait for the counters to stabilize, and

take data again. The team used Windows 2000 counters through PerfMon rather than the WAST counters because they found them to

be more reliable.

After the tests are completed, move the data into an Excel spreadsheet and analyze it.
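
As a minimal sketch of that analysis step (assuming the counters were logged to a CSV file; the file name, warm-up row count, and column layout are assumptions, not artifacts from the white paper):

```python
# Hypothetical sketch: average the logged counters after the warm-up period.
# "perflog.csv" and its layout are assumed: timestamp first, one counter per column.
import csv

WARMUP_ROWS = 0  # set this to skip samples taken during the 20-minute warm-up

def average_counters(path="perflog.csv", skip=WARMUP_ROWS):
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)                      # timestamp + counter names
        rows = list(reader)[skip:]
    for col, name in enumerate(header[1:], start=1):
        values = [float(r[col]) for r in rows if r[col].strip()]
        if values:
            print(f"{name}: {sum(values) / len(values):.2f}")

if __name__ == "__main__":
    average_counters()
```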

Results, Analysis, and Configuration Selection
We were interested in the answers to a number of questions: Are big bricks or little bricks more effective? What happens if

you add more processors? What happens if you add more servers?

Big bricks vs. little bricks
As it turns out, the performance of comparable server and workstation hardware isn't much different. The servers

have more sophisticated hardware, such as RAID disk arrays and dual power supplies. This increases their reliability, which

is very important for the database tier. On the other hand, the Web farm tier has a lot of redundancy, so you may want to

consider using the less expensive machines.

Scaling to more processors
For the Duwamish Online application, adding processors to the same box helped, but only to a point. Increasing the number of

processors from one to two resulted in about a 60% increase in performance. These results are good ones, considering the

relative cost of processors and computers and the fact that the dual-processor machine is no more difficult to maintain than

a single-processor machine.

 

Figure 4: Adding a second processor adds about 60% capacity (two-server farm)


However, adding an additional two processors did not help very much due to limitations of the Duwamish Online application.

(We're in the process of analyzing and overcoming these limitations—we'll have an article about how we did it later. We've

already gotten a 30% improvement when going from two to four processors—nice, but not as much as we'd like.)

Adding third and fourth processors doesn't add much capacity to the current application.

 

Figure 5: Adding a third or fourth processor


Note that your results will vary from the Duwamish Online team results. Your application may scale to four processors or

more. The only way to find out is to test.

Web farm scales linearly with added Web servers
Although scaling up by adding more processors doesn't give limitless scalability, scaling out by adding more servers to your

Web farm works quite well because it scales linearly. Two servers have twice as much capacity as one, ten gives us ten times

as much, etc. (Note that this assumes no other bottlenecks. If you push another piece of the system past its limit, such as

the database server or network, your scalability will end until you improve the capacity of the bottleneck.)
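
Under that linear-scaling assumption, sizing the Web farm is simple arithmetic; in the sketch below, the per-server throughput figure is an assumed example, not a Duwamish Online measurement.

```python
# Rough Web-farm sizing under linear scaling (assumes no other bottlenecks).
import math

peak_pages_per_sec = 92.59   # requirement from the 80/20 calculation above
per_server_capacity = 30.0   # assumed pages/sec per server at the target CPU cap

servers_needed = math.ceil(peak_pages_per_sec / per_server_capacity)
print(f"servers needed: {servers_needed}")   # 4 with these assumed numbers
```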

 

Figure 6: Adding more Web servers gives linear scalability


Your network can be a bottleneck
Recall the testing situation we ran into where we put all the machines on one network using a hub—and swamped the network,

giving horrible performance. Your network design can radically affect your performance, so be careful and be sure to measure

your network usage.

Use switches, dedicated networks instead

The solution was to do two things: use a dedicated network for communications with the Internet and one for inter-server

communications, and use switches rather than hubs. With switches, each link gets its own 100 Mbps of capacity, rather than 100 Mbps being shared by the entire network.

More database server capacity than we need
Finally, Duwamish Online is unable to provide any information about how the database server responds when it's heavily loaded

because it was never that heavily stressed. The actual database servers Duwamish Online deploys have only two processors

(the test machine in the chart below used up to four).

 

Figure 7: Database server does not show stress as it reaches maximum capacity


The lines are relatively flat at the right end because the number of pages per second has stopped rising. This can also be

due to caching.

Hardware selection
Hardware selection was relatively simple. Compare the performance figures for various hardware configurations with the

constraints (at peak load: 92.59 pages/sec with a five-second response time and 50% maximum CPU utilization; at normal load: 5.79 pages/sec with a three-second response time and 20% CPU utilization).

Out of the entire set of tested machines and configurations, here are just a few below. The cells that are bold and

underlined indicate working configurations.

As you see, configuration G doesn't work. At peak time, or at 50% CPU utilization, this configuration only delivers 67.2

pages/sec (calculated by extrapolation). This number is lower than the number of pages needed (92.59), so G doesn't work.

Configuration J did not work either. This configuration provided 60.5 pages/sec at 50% CPU utilization.

Configurations I and L passed the requirement tests. L is a big bricks configuration and I is a little bricks configuration.

With Configuration I, if CPU usage is allowed to reach 70% as a worst case, the configuration delivers 173 pages/sec, which is 87% more than needed at peak time and leaves room for growth: (173 - 92.6)/92.6 = 87%. Even if the CPU is stressed to 92%, the response time is still less than three

seconds and this provides enough room for future growth. More important, DB usage remains very low throughout. Because Web

servers scale out very well, this presents a better safety margin and is less expensive than Configuration L.
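
The extrapolation and headroom arithmetic used in this comparison can be written down directly; the sketch below uses the Configuration I figure quoted above (173 pages/sec at a 70% CPU cap) and the 92.59 pages/sec peak requirement.

```python
# Linear extrapolation of a measured throughput point to a CPU-utilization cap,
# plus the headroom check against the peak requirement of 92.59 pages/sec.
PEAK_REQUIREMENT = 92.59   # pages/sec at peak load

def capacity_at_cap(measured_pages_per_sec, measured_cpu_pct, cpu_cap_pct):
    """Extrapolate pages/sec linearly from the measured CPU% to the cap."""
    return measured_pages_per_sec * (cpu_cap_pct / measured_cpu_pct)

def headroom(capacity, requirement=PEAK_REQUIREMENT):
    """Fractional headroom above the requirement, e.g. 0.87 for 87% extra."""
    return (capacity - requirement) / requirement

# Configuration I figure quoted above: 173 pages/sec at the 70% CPU cap.
print(f"headroom: {headroom(173.0):.0%}")   # about 87%
```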

Dual processor machines for everything, including the database

Duwamish Online chose dual-processor machines for everything, including the database server. Four processors did not add much

benefit to the Web farm. Finally, the database does not need more than two processors because it is not being stressed.

Conclusion
What have we learned in planning capacity for and testing Duwamish Online?

Test, test, test
There is no substitute for doing performance testing throughout your development cycle. You can use the results to pick the

best hardware and tune it, as well as to make sure components are installed properly.

Web farms scale linearly
Barring other bottlenecks, you can scale your Web farm to handle as many users as you need.

Note that adding more than two processors doesn't help Web servers with this version of Duwamish Online, but that doesn't

mean that other applications won't scale—you'll have to test on your own. For more information on Duwamish Online Web server

scalability, see the Duwamish Online Diary at: (http://msdn.microsoft.com/voices/sampleapp.asp).

Did not have to do anything special for the DB—no partitioning, no nothing
Note that Duwamish Online didn't have to do anything special for the all-in-one database on a relatively small server. If

database performance were an issue, we'd be able to split up the database and/or buy more powerful hardware. Because we used

SQL Server 2000 (the world's best-performing database server software), the database code will never have to be rewritten and

can always be scaled.

Must watch for bottlenecks in memory, CPU, network, DB
As you build your application, be sure to watch for bottlenecks. Don't forget about CPU utilization, paging counts, pages per

second, response time, and network collisions.

Monitor while running
You'll also want to be sure to keep an eye on your application as it is running. This also gives you the opportunity to get

real data about peak loads.

How to scale Windows DNA Web applications
First, you need to determine where the bottleneck(s) are. Then, if your bottleneck is in your Web

servers, you can add more servers to take care of the load. It may also be possible to upgrade existing servers to have

higher capacity, depending on your application.

If it's your database that's keeping you from scaling, you have several choices. You can grow the server by adding more

processors and memory. If that fails, you can segment the database, replicating read-only and read-mostly data to the Web

servers. You can also have SQL Server 2000 automatically segment your database, saving you time and trouble.

Finally, if it's your network that's keeping you from scaling, redesign it using switches and subnets.

Appendix: WAST Best Practices
Web Application Stress Tool (WAST)
• WAST is a Web stress tool that is designed to realistically simulate multiple browsers that are requesting pages from a Web

application. It can be used to generate customizable loads on various Internet services, and offers a rich set of features

that are desirable for anyone interested in gathering performance data for a particular Web site. WAST is a powerful tool that can generate a heavy load against an application with a reasonable number of test clients. Also, developers and customers can reproduce the test results.
 
• More information about WAST is available at:

http://www.microsoft.com/technet/archive/itsolutions/intranet/downloads/webstres.mspx.
 
• There are various sophisticated Web tools, such as RadView's WebLoad and RSW's e-load, that are available commercially. The

Duwamish Online team found WAST to be adequate for their testing purposes. One advantage of using WAST is that the Duwamish

Online team could continue sharing their experiences with the customer about using this tool.
 

WAST: Best Practices
Client machines: Estimate the number of clients required to generate the desired maximum load. Within one series of tests, try to use the same number of clients so that the results are comparable.

Setting multiplier to stress the servers: Estimate the maximum number of concurrent user requests required to push your Web

server farm to 100% utilization in a pre-test. During the Duwamish Online testing process, it was found that as long as

the test clients can generate enough load to stress out the server, setting the multiplier to 1 gives better results.

However, the number of test clients is not unlimited and as the number of threads increases, thread thrashing occurs.

When the server farm needs to be stressed out without a sufficient number of test client machines, a higher multiplier might

be needed. For example, if you found that with 4000 WAST threads, all with a multiplier of 1, you still could not stress out

your server (which you could tell from the server's CPU usage), you could use the multiplier to increase the stress. However,

current releases of WAST don't do accurate measurements if the multiplier is set above one, so run one machine with a

multiplier of one and do your measurements using that machine. For instance, if you have nine test clients, run 100 threads per client on eight of them with a multiplier of 5 (which gives you a total of 4,000 "concurrent users"), and then run a single thread with a multiplier of 1 on the ninth client machine. Collect TTLB (time to last byte) data from this last client.
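
The simulated load produced by a setup like that is simple to sanity-check; the sketch below just restates the arithmetic of the example above.

```python
# Simulated "concurrent users" generated by the WAST clients in the example above.
load_clients = 8          # clients running with a multiplier of 5
threads_per_client = 100
multiplier = 5

simulated_users = load_clients * threads_per_client * multiplier   # 4,000
print(f"simulated concurrent users: {simulated_users}")
# Plus one extra client running a single multiplier-1 thread for TTLB measurement.
```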

Using SessionTrace: Use SessionTrace to record the detailed communication between WAST and the Web server(s). When defining a new WAST script, it is important to find out whether all URLs used in the script are functioning as expected and the Web server is returning the desired response. If not, you may obtain misleadingly good performance results while the Web server is simply returning error responses.

To turn tracing on, set the SessionTrace value (type REG_DWORD) to 1 under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\WAS in the registry. Finally, remember to turn SessionTrace off (set it back to 0) after validating the new script; otherwise the disk will fill up quickly.
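
If you prefer to toggle the value from a script rather than the registry editor, a minimal sketch using the registry path given above might look like this (Python's standard winreg module; run it with administrative rights):

```python
# Hypothetical sketch: turn the WAST SessionTrace registry value on or off.
# Uses the HKLM\SOFTWARE\Microsoft\WAS path quoted above; requires admin rights.
import winreg

def set_session_trace(enabled: bool) -> None:
    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SOFTWARE\Microsoft\WAS",
        0,
        winreg.KEY_SET_VALUE,
    )
    try:
        winreg.SetValueEx(key, "SessionTrace", 0, winreg.REG_DWORD,
                          1 if enabled else 0)
    finally:
        winreg.CloseKey(key)

set_session_trace(True)   # remember to call set_session_trace(False) afterwards
```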

Follow HTTP Redirects option: Do not use this option if the script has already recorded the redirected URLs; otherwise the redirected pages will be counted twice.

Throttling: For standard/benchmark tests, use 128K ISDN throttling to simulate the average bandwidth of the target users. When testing the Duwamish Online application, it was found that the more throttling was applied, the longer the warm-up time needed to be, because more throttling lowers the request rate.

© 2000 Microsoft Corporation. All rights reserved.

The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of

the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a

commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date

of publication.

This document is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS DOCUMENT.

Microsoft, BackOffice, MS-DOS, Outlook, PivotTable, PowerPoint, Microsoft Press, Visual Basic, Windows, Windows NT, and the

Office logo are either registered trademarks or trademarks of Microsoft in the United States and/or other countries.

Macintosh is a registered trademark of Apple Computer, Inc.
