2004 Microsoft New Technology Outlook Conference: Face to Face with Bill Gates

At the 2004 Microsoft New Technology Outlook Conference, Bill Gates will come to Beijing in person; the theme of his presentation is "A New Generation of Seamless Computing."
I just got a call from Microsoft confirming whether I'll attend, and they'll be mailing out the admission ticket... so excited...
Thanks to Zhanbos for posting this news on 博客堂: http://blog.joycode.com/zhanbos/posts/25436.aspx

Below is Bill Gates' speech at WinHEC 2004:

Remarks by Bill Gates, Chairman and Chief Software Architect, Microsoft Corporation
"Seamless Computing: Hardware Advances for a New Generation of Software"
Windows Hardware Engineering Conference (WinHEC) 2004
May 4, 2004
Washington State Convention and Trade Center
Seattle, Washington


BILL GATES: Well, good morning. It's great to see you all here and have a chance to talk about what we're investing in, in software, and the opportunities that creates for far better PCs, far better peripherals, and far better small computers connected up through the PC.

This really is a period where connecting things together, enabling new scenarios around creativity, communications, photos, video, they're becoming a reality. And to make all this come together in a simple way, software has got a lot that we need to do. And that's why Microsoft is investing at record levels because we see that as such an important factor of where we're going.

If we look at different decades of the PC, the thing that was the most critical limiting factor I think has changed every decade. In the 1980s we were very limited by the hardware. Graphics interface really wasn't possible until we got up to the 386, and the memory and capacity that offered. And so it was fascinating to see how we could unleash new capabilities as those chips came along.

Now, that kind of improvement has continued and been super important, but I think the next limiting factor we ran into was that a lot of the interesting scenarios required the machines to be able to connect up to each other. And that's where the miracle of the Internet protocols, low-cost connectivity, came and really revolutionized the breadth of things we could do.

Today we take it for granted that any computer in the world can connect up to any other computer in the world. Now, making it so that that can be done in a secure way, so the information is well understood, so you can discover exactly where the information is you care about, there's still a lot of work to be done, and that's the frontier that we've got today, making sure that software connections are there to make things simple.

At one level you can say this is about consumer scenarios, getting the photos on your camera to be stored and organized in an automatic way, getting your schedule that you may have on different devices to show up, getting all of your storage and favorites so that you're not moving around -- it just shows up, even when you sit down at a device that you don't use very often, as soon as you authenticate, boom, all that information is there.

At the other end of the spectrum we have the idea of e-commerce, finding buyers and sellers, engaging in complex transactions, tracking those things, making them immune to the viruses or attacks that take place, making sure that the security capabilities are there.

And so there's a lot of focus around Web services, standardized protocols, data formats around XML that are going to make sure that these constraints don't hold us back. But it puts us in a very important position in terms of driving the standards, whether it's at the base protocol level, or the schema or the other improvements we need for those software connections.

Now, the proliferation of devices, in a sense, is a good sign. It creates opportunity for us to give people form factors that are appropriate in different contexts, from wall-sized to desktop to Tablet to pocket-sized, even down to wrist-sized type devices and all the different peripherals that go with those. We have different scenarios when you're in the car than when you're not in the car. And so we can expect more hardware design points.

Now, how do we avoid that being a problem for the user? This is where the software connections and high-speed network transports are very, very critical. Will people think it's advantageous to organize their music on many, many different devices? Probably not. Probably they'll just want to do that on one device. And yet we don't want them restricted to that single device.

So bringing certain worlds that have been separate together is a real theme for us, the world of the TV brought together with the PC, Media Center, Media Center Extender; taking the videogame and making it more a member of the PC family in terms of how you develop the software and the kind of community involvement that you have, taking Xbox Live and saying, OK, where can we take everything we've learned about that and make sure that all PC users get connected and get those capabilities?

So we've had the videogame-PC boundary, we've had the set-top-box-PC boundary, we've had the phone in the home, the terrestrial phone-PC boundary, we've had the mobile phone-PC boundary. Every one of those boundaries will be broken down by having the software connectivity and having the user's state move seamlessly between those different devices. And so, conquering those scenarios, making those very simple is quite important to us.

Now, as we do that, some concerns show up, security concerns: Should this device be able to connect up to that device, how do you authenticate that? We get concerns about copyright protection. The Hollywood studios, as we move beyond DVD to that next level of resolution, they're going to insist that there's a digital rights capability that allows them to protect their bits. So standards like HDMI are coming into play, and making sure that those don't complicate the user scenarios so the user can move things between devices, understand the breadth of their rights and we keep it simple while making all of that content available.

Now, on the security front, of course, the thing that has gotten the most attention is the fact that these security attacks can propagate throughout the network and literally affect millions of machines, using the network, which is a great thing, in a bad way.

If we look at security broadly, there's a lot of challenges we need to solve. The fact that software can be compromised; do you know if somebody's gone in and modified your version of Windows, your applications? The fact that your storage, if somebody takes it, they've got access to your information, the fact that people can flood the network, go in and tap into network conversations, so many different areas that if any one of them is a weak link, you don't have the willingness to move and use these digital approaches.

Authentication is a great example of this. If your password is insecure, then nothing else matters; and yet passwords are too simple to guess, and people use the password they have at work on insecure, consumer-oriented systems, and so that is an area that needs focus.
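To put the "too simple to guess" point in numbers, password strength is just the size of the search space: charset size raised to the password's length. A quick editorial sketch (the character-set sizes below are the only assumptions):

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    """Bits of entropy for a randomly chosen password."""
    return length * math.log2(charset_size)

# A 6-character lowercase password vs. a 10-character mixed-case+digits one.
print(entropy_bits(26, 6))   # ~28.2 bits -- only about 300 million guesses
print(entropy_bits(62, 10))  # ~59.5 bits -- about 8e17 guesses
```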

When the Internet was designed, it wasn't designed thinking about malicious behavior. It was designed very well for the case where parts of the network were unavailable. But the idea that somebody would be maliciously generating packets was not one of the things that was taken into consideration. And so being able to filter out packets that are there just to flood the network, being able to make sure source addresses are verifiable so you don't have anonymous attacks coming in, making sure that with the e-mail protocol you can verify the e-mail really is from who it appears to come from -- those are things that we're now layering onto the Internet standards, and as an industry we need to really pull together and do that.

For this class of attacks that propagate, there's a few key techniques that really will make all the difference and make sure that those things are not the overwhelming problem that in a few cases they have been in the last few years.

Number one here is the idea of isolation and resiliency, making sure that most systems can't be asked to do arbitrary things. Historically we've thought of this as the firewall, and we've thought of the firewall as just being at that corporate boundary. In fact, that is a very important boundary, but it can't be the only boundary because even within a corporate environment you can have a vendor come in with a portable machine, or somebody who has had a portable machine outside the firewall coming in and connecting up, and so we need to think of isolation in a much more sophisticated way. We need to think of firewalling at the individual machine as well as at that corporate boundary, and we need to be able to look at machines as they come in and see, are they up to date, do they have the latest software improvements, and so should they be allowed to come onto that network?

So making it easy for people to be isolated and in an appropriate way, that's an absolute requirement -- giving them free, simple tools that they can audit, are they doing these things?

Now, isolation goes a long way, but it doesn't solve everything because we still need to have machines that have open ports. Web servers have to have port 80 open. Mail servers have to have SMTP open. And so, when there are attacks, in some cases we need to be able to update the software.

And again this is something that should be very simple to do, very simple to set up and require no manual effort. If the improvements are really regression checked, very important critical updates, they should just flow onto these systems that need to have the open ports and you should know which things need which updates, and mail servers only need their updates, Web-facing servers need their updates; that can be made very, very simple. And having that infrastructure in place is something that we've made a lot of progress on, and over the next year and a half, I think, that will get fully in place.

For a consumer machine, it's very simple: turn the firewall on and automatically subscribe to Windows Update. For the corporate environment, it's a little more complex than that, but no more than a single page of guidelines that will make this very straightforward.

Well, of course, we also need to reduce the number of times that there are attacks that can take place. Here we've had a great collaboration with the chip vendors in terms of putting in a feature we call NX that allows us to mark memory as to whether it's executable or writable, and not have memory that's both executable and writable, taking away a very classic way that attacks are propagated, the so-called overrun-type attacks. And support for this NX capability is something that we're putting into this security release of Windows as it comes into those new chips.
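The NX rule Gates describes, that a page may be writable or executable but never both, is what defeats the classic overrun attack of writing code into a data buffer and then jumping to it. A toy model of the policy (purely illustrative, not the actual Windows DEP implementation):

```python
# Toy model of NX (no-execute) page protection: a page may be writable
# or executable, never both at once.
class Page:
    def __init__(self):
        self.writable, self.executable = True, False  # starts as a data page

    def make_executable(self):
        # Flipping to executable revokes write access (the W^X rule).
        self.writable, self.executable = False, True

    def write(self, data):
        if not self.writable:
            raise MemoryError("write to non-writable page")
        self.data = data

    def execute(self):
        if not self.executable:
            raise MemoryError("NX fault: page is not executable")

page = Page()
page.write(b"\x90\x90")   # attacker overruns a buffer with shellcode...
try:
    page.execute()        # ...but the data page is not executable
except MemoryError as e:
    print(e)              # NX fault: page is not executable
```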

Now, on the quality front we can measure very concretely the progress there's been. The first release we did after we had this huge focus on reducing the vulnerable code base, on really reviewing code techniques, changing the compiler, was Windows Server 2003. And so here you've got a comparison of the previous release's critical security vulnerabilities over the first year after release, and the same thing for Windows Server 2003. So you see about 13 versus 42. Now, we still don't want to have 13. We think that through the right kind of methodologies and testing tools, there's probably another order of magnitude improvement that can take place here.

Now, that's in the face of the attackers, of course, trying to get more and more sophisticated and going after the publicity that can be received. There's an irony, which is that because those people are going after that publicity, they're actually forcing the system down the learning curve in terms of its security richness. And that means that for somebody who's going in just to steal information, not to propagate and get publicity, but actually just to do silent security attacks, their job is becoming nearly impossible, because of this learning curve improvement that's taking place with the high-volume software, with Windows in particular.

And so the real security threat, which is not the propagating type but the malicious type, the resiliency we're getting to that is dramatic, and it's absolutely come from being a high-volume piece of software where, because of propagating attacks, we've had to make this not only a priority but invent new things, invent new ways of proving out code correctness and ways of doing the review methodology.

Now, moving off of security, which is a required thing but we feel very optimistic that will be in place, what are the new kinds of scenarios that come out of the software work we're doing that we talk about as seamless computing?

I mentioned we have a large R&D investment, record levels, $6.8 billion in software R&D to drive these scenarios. That R&D budget now has us being the largest in the technology business. IBM would be the second at about $5 billion, of course, spread across their different activities.

And with ours, we don't want that to be about multiple architectures; we really just have the one architecture, taking the industry standards of XML Web services and applying those evenly between the components inside Windows itself, being able to log and have contractual guarantees about how these components relate to each other using the rich methodology around Web services. Even things like management, we're not creating some side set of APIs and terminology; we build that in with that one approach.

And so we think with that kind of focus we can take things like how do you store your memories, how do you know that they're available to you and navigate them in rich ways, over a period of years the hardware and software advances will make it so the idea of digital memories that are easy to access, whether that's movies or photos or documents, that will be a very straightforward thing.

The idea of collaborating in real time, where you've got talking and the screen together, at the consumer side playing music, sharing photos; at the business side working on business plans, organizing calendars, thinking of that PC with a screen as now a mainstream communication device, certainly we can drive forward with that. We need better microphones, better screens, a lot of software advances to drive that, but it really takes the PC and reinforces the central position that the PC is in.

Now, making all these things come together and connect, a lot of hard work going on with that. And some people question, will it be simple, will it really come together? Well, in fact, you heard from Jim that with this plug-and-play initiative, in collaboration with many of you, we are making breakthroughs. I thought I'd share a video that shows the kind of hard work going on and what's likely to come out of it. So let's take a look at that.

(Video segment.)

(Applause.)

BILL GATES: Well, the toaster is not at the top of our priority list, but a lot of hard work getting all those other things to come together.

We hear about a lot of the wireless advances and, in fact, every one of these things in terms of range of capability, performance, cost to implement, the kind of devices they connect up, there's a role for many of these wireless approaches. And yet for the user this isn't really going to come together unless software is hiding a lot of those differences. Making it so that, if you have a choice of connectivity, that the characteristics are well understood, the one that's the lowest cost fits your scenario, you're just automatically connected up to that.

Even the ones we have today, you know, take DSL and cable, you shouldn't need to put any disk in or go through any process; you should just plug in that DSL and, boom, automatically the system should recognize it's there, be up and running, no support calls or disks, none of that necessary.

We've actually pushed this forward quite a bit in the 802.11 realm, the automatic recognition, the connecting up. Jim talked about the one piece that has still been very tough, which is getting that WEP key distributed around so, if you want to be working with the secure form of 802.11, that that isn't a lot of effort to get that up and going.

Now, 802.11 is, of all of these, the one that's the most explosive. It's here now, it's built into all the devices, and the cleverness around getting the bandwidth up, getting video and audio to go across 802.11, I'd say that is where the industry has got the most attention, and for very, very good reasons. For the connected home, for the connected office, for the mobile hot spot, 802.11 will take on a very, very central role. So continuing to really complete out everything there is probably at the top of the list.

Now, at the same time, Ultra Wideband is coming along, effectively taking the amazing work that's been done around USB and sort of giving us a wireless-USB-type capability; that's also pretty fantastic. With those kinds of bandwidths, connecting up to a storage device, say your portable music device, connecting up to the screen, all of those things are going to be straightforward. And yet making sure the authentication model and the QOS work there will mean that we on the software side have to be working with everyone else to make sure those pieces are done right.

Peer-to-peer mesh networks, this is the idea that, if you have all this wireless connectivity, can't you group people together and share the bandwidth, cache on each other's behalf and share a connection to the backbone so that it simplifies the broadband connectivity and reduces the cost of that? And some of the software work around that is a very key project that we've got in our advanced development group. We think the idea of meshing together, automatically recognizing other systems, will become a standard part of PC networking over the next three to five years.

Connecting up to these 3G networks, going beyond just the pocket-sized device but bringing the PC in as well, that's become very relevant as they achieve wide-area coverage with reasonable data rates, and often now pricing plans that are even attractive, some of them unlimited or high-capacity type offerings. Now the PC is going to come in, and the value to the large-screen device of those connections will always be greater than the value to the small-screen device. In fact, with the smaller-screen device there's a limit to how much data you need to transmit, whereas with the PC sort of the sky is the limit on that, taking advantage of the investment that's been made there.

So making all these come together, making them simpler, being able to connect up devices in the appropriate way, a lot of opportunity, because the scenario that every screen in the house can have your videos and photos and look at your family calendar, we're certainly not there yet, and we need software and hardware.

At the Web services level, we've done the basics. We have security now; reliable messaging and transactions are just being finalized. Taking this now and saying, OK, how do you take something like a rich workflow process or an application deployment and have standard ways of thinking about that, that is sort of the next frontier, building on the foundation that's there.

These ad-hoc connections, peer-to-peer type connections that will come out of that we think are pretty exciting. Being able to see everyone who's there, being able to manage that in a very straightforward way, being able to diagnose things that today are very complicated, you know, today if you can't get connected, it's kind of hard to know what it is that is the failure point, and software has a lot to do with that.

The Web service extensions that are these plug-and-play advances fit into that, because scaling from that home network up to that corporate network, having a description of what is the peripheral, what are its capabilities, how do you talk to it, set its properties, provide help information on it, all of those things are very high level protocols and even the idea of how we think of the device driver needs to fit into this because the device driver essentially defines above the transport the kind of operations that are taking place and how those things come together.

So what else is going on with the Web services thing? Being able to take location information and have automatic behavior based on that, getting GPS detection built-in, we see that as a very important thing. Being able to know what are the high bandwidth networks, you bring down a lot of information and cache that so that you use less capacity as you go to lower capacity, more expensive type networks or where you might even be offline, being able to take these high-speed networks and do a lot of the work, including even IPSEC at the hardware level, that's very important. IPSEC is a very important isolation technique. Literally verifying who's trying to make the connection, that's part of the isolation story that we're explaining now in a very crisp way to corporate customers. And so you'll see a very high percentage of the network traffic use IPSEC and so that kind of offload becomes important.

With media streaming, understanding how to prioritize traffic is pretty important: which packets are more important than others, what happens if you drop packets, do you keep trying to resend them, or should you just give up on, say, some video packets that you were unable to send? Getting the different parts of the stack to understand this, and yet doing it in a way that preserves the security of the intellectual property, the ease of setup, the ease of diagnosing errors, that's why we've really said we can only do this around one unified architecture, building up from the plug-and-play level through the high levels of what we're doing with the Web service capabilities.
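The retransmit-or-drop decision in that paragraph can be written down as a very small policy. A sketch only; the packet fields, the Transport stand-in, and the 150 ms playout deadline are illustrative, not from the talk:

```python
import time

PLAYOUT_DEADLINE = 0.150   # seconds: a media packet older than this is useless

class Transport:
    """Stand-in for the network layer; send() may fail and return False."""
    def send(self, packet) -> bool:
        print("sent", packet["kind"])
        return True

def send_packet(packet, transport):
    # Bulk data (file copies, e-mail) is retried until it gets through.
    if packet["kind"] == "file":
        while not transport.send(packet):
            time.sleep(0.01)                 # back off, then retransmit
    # Real-time media is best effort: a late video packet is worthless,
    # so give up on it rather than resend it after its playout time.
    elif time.time() - packet["captured_at"] < PLAYOUT_DEADLINE:
        transport.send(packet)

t = Transport()
send_packet({"kind": "file"}, t)
send_packet({"kind": "video", "captured_at": time.time()}, t)
send_packet({"kind": "video", "captured_at": time.time() - 1.0}, t)  # dropped
```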

So, let's now move forward and think about how this is all going to have the capacity and really make sure that there are no limits for these amazing scenarios. I'd like to show you a piece of breakthrough work that Jim Gray, who's one of our researchers, and Professor Harvey Newman from Caltech have done that really shows that the sky's the limit, so to speak, in terms of what can be done. So let me welcome them on stage. (Applause.)

JIM GRAY: Thanks, Bill.

BILL GATES: Welcome, Jim.

JIM GRAY: Well, Harvey, thanks very much for coming. You may know Harvey is a professor of physics at Caltech, but he's also been working to connect the experiments at CERN in Geneva, Switzerland, to the physicists in the Americas. Harvey is also a co-leader of the International Virtual Datagrid. And first, Harvey, what is the International Virtual Datagrid?

HARVEY NEWMAN: Well, large physics experiments have become international collaborations involving thousands of people. Current and new experiments share a common theme. Groups all over the world are distributing, processing and analyzing massive datasets. The picture you see here shows the CERN LHC accelerator. Four experiments will generate about 300 terabytes per second of scientific data. Prefiltering reduces that to about a gigabyte a second of recorded data, or about 20 petabytes a year.

JIM GRAY: So just to put this in perspective, that's two zettabytes a year. That's like reading 100 million copies of the Library of Congress and keeping a thousand of them on disk.

HARVEY NEWMAN: And moreover we continually reprocess and redistribute it as we advance our ability to separate out the rare physics discovery signals from the petabytes of standard physics events that we already understand. That's why we need gigabyte-per-second bandwidth.

We developed the four-tiered architecture that you see here to support these collaborations. Data processed at CERN is distributed to the next level down, the Tier One national centers, and then to the lower tier, the Tier Twos, for further analysis. The Tier One sites act as a distributed data warehouse for the lower tiers, and refine the data by applying the physicists' latest algorithms and calibrations. The lower tiers are where most of the analysis gets done, and those lower tiers are also a massive source of simulated events.

JIM GRAY: So, just to give you a sense of scale, 20 petabytes is about 100,000 disk drives.
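Those round numbers check out. A quick back-of-the-envelope verification (assuming a typical 2004 disk of 200 GB and the commonly cited ~20 TB for the Library of Congress; both figures are assumptions, not from the talk):

```python
PB, TB, GB = 10**15, 10**12, 10**9

recorded_per_year = 20 * PB              # "about 20 petabytes a year"
print(recorded_per_year / (200 * GB))    # 100000.0 -> "100,000 disk drives"

loc = 20 * TB                            # Library of Congress, ~20 TB
print((2 * 10**21) / loc)                # 1e8 -> "100 million copies" (2 ZB)
print(recorded_per_year / loc)           # 1000.0 -> "a thousand of them on disk"
```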

HARVEY NEWMAN: That's right. And as the LHC intensity increases, the accumulation rate will also increase, so that we expect to reach an exabyte stored by about 2013.

All of the flows in the picture are designated in gigabytes per second, so it's clear why we need a reliable gigabyte per second network.

And our bandwidth demand is accelerating, along with many other fields of science, growing by a factor of two every year or a thousand-fold per decade. So this is progressing much faster than Moore's Law, and we have to innovate each year and learn to use the latest technologies just to keep up.

This is a picture of the network we're building. We're working with partners on several continents to extend it around the world. The next generation will be an intelligent global system with real-time monitoring and tracking, and with adaptive agent-based services at all layers that control and optimize the system.

JIM GRAY: So Internet2 is a consortium of universities that's building the next-generation Internet, and, to encourage work in high-speed networking, they've set up a contest to recognize advances. Microsoft entered and won the first contest, but we haven't been paying attention for a long time and, in fact, Harvey's team has been setting the records lately.

HARVEY NEWMAN: To set the records, including the one just certified by Internet2, we used the network shown on the previous slide and out-of-the-box but tuned TCP/IP in order to move the data from CERN to Pasadena, and sometimes vice-versa. Last night we set a new record, actually at 4:30 a.m., using Windows on new Newisys AMD Opteron and Itanium 2 servers with S2io and Intel 10-gigabit Ethernet cards and Cisco switches, and we reached 6.8 gigabits a second, or just about 850 megabytes per second.

JIM GRAY: So that's a CD per second.

HARVEY NEWMAN: We needed 64-bit machines for this; 32-bit Xeons top out at about 4 gigabits per second. And so now we're working towards 1 gigabyte per second.
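Since both gigabits and gigabytes appear in this exchange, the unit conversions are worth pinning down (taking a CD as roughly 680 MB, an assumed figure):

```python
# 6.8 gigabits per second, converted to megabytes per second:
print(6.8e9 / 8 / 1e6)   # 850.0 -> "just about 850 megabytes per second"
print(850e6 / 680e6)     # ~1.25 -> roughly one ~680 MB CD every second
print(4e9 / 8 / 1e6)     # 500.0 -> the 32-bit Xeon ceiling of 4 gigabits/s
```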

JIM GRAY: So Harvey and I have been collaborating on trying to move data from CERN to Pasadena at a gigabyte a second. And the basic deal is Harvey moves it 11,000 kilometers, about 7,000 miles, and I move it the first meter and the last meter. (Laughter.) So Harvey I think has a bit harder job than me, but it turns out that I couldn't do this with 32-bit platforms. I tried and I just couldn't get to 1 gigabyte a second of disk IO bandwidth.

We took a standard Opteron and with about 20 SATA drives we get to a gigabyte. With about 48 SATA drives on a Newisys AMD Opteron we get to about 2.5 gig raw, and through NTFS stripe sets we get to about 2 gigabytes a second. So I'm pretty comfortable that we can move data around inside the machine at about a gigabyte or more a second.
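Gray's drive counts line up with the sequential throughput of a single drive of the era, about 50 MB/s (an assumed figure for illustration):

```python
per_drive = 50e6              # bytes/s: ~50 MB/s sequential, typical 2004 SATA
print(20 * per_drive / 1e9)   # 1.0 -> 20 drives reach a gigabyte per second
print(48 * per_drive / 1e9)   # 2.4 -> close to the quoted 2.5 GB/s raw
```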

HARVEY NEWMAN: I'm confident we'll soon be able to move a gigabyte a second or more across the globe, but we still need to do a lot of work to make TCP/IP stable over long distance networks at gigabyte-per-second data rates, and also work is needed to put this into production use, which is what enables the science.
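Why TCP/IP is hard to keep stable at these speeds over these distances comes down to the bandwidth-delay product: the sender must keep bandwidth times round-trip time in flight, and standard TCP recovers from a single loss painfully slowly. A sketch, with an assumed 170 ms CERN-to-Pasadena round trip:

```python
bandwidth = 8e9     # bits/s: one gigabyte per second on the wire
rtt = 0.170         # seconds: assumed CERN-to-Pasadena round trip

window = bandwidth * rtt / 8          # bytes that must be in flight, unacked
print(window / 1e6)                   # 170.0 MB of send window

# After one loss, standard TCP halves the window, then grows it back by
# about one 1460-byte segment per round trip:
recovery = (window / 2) / 1460 * rtt
print(recovery / 60)                  # ~165 minutes to regain full speed
```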

Data-intensive science needs these technologies to allow our global scientific communities to collaborate effectively at gigabyte per second speeds. This is essential for the next round of discoveries at energies that were previously unobtainable.

JIM GRAY: So the basic takeaways here are that networks are a lot faster than you think. They're really running at gigabyte per second speeds. And that means that you need to move things around inside the system, in the IO system, in the memory system, at tens of gigabytes a second in order to keep up, in my opinion.

So the people who did this, Harvey's team and the team at Caltech are going to be in the Blue Room today at 4:00 if you want to hear more about this, but working with Caltech and working with CERN has been a wonderful thing for the Windows group. We've learned a huge amount from them, and I really want to thank Harvey for coming here today and telling us about his work, and with that I think we'll turn the podium back to Bill. Thank you very much.

HARVEY NEWMAN: Thank you. (Applause.)

BILL GATES: Thanks. Good job.

It's amazing to see not only the capability that can be put together there, but actually the PC, the device that is setting those records. And that's long been a dream to get it so that you didn't have to think of high-end computing and PC computing as two separate things. And I think a really fundamental piece of that is this move to 64-bit computing.

Now, those of you who have been around this industry a long time, which is a lot of people here, remember all these memory address space transitions can be very messy. We had 8-bit computing, which was essentially a 16-bit address space. As we moved to a 20-bit address space with the 8086, 8088, we had to go with a whole new instruction set, start over with the software, that was a very big change which the PC ushered in in 1981.

That 20-bit address space was kind of unusual: the segment registers were shifted over by 4 bits and added to a 16-bit offset. But, hey, 20 bits, we were pushing up against 16. I never said 16 would be enough, although there's this fake quote that says that I did. The memory limitation of the PC was the 20-bit address. Then, with the 286, we went to 24 bits. So, we went 16, 20, 24 with the 286.

And then, with the 386, we went up to 32-bit addressing. And that's been adequate for a lot of things. But at the workstation level and server level, we've really started to push that limit. Now, we're going from 32 to 64, and that would give us a long time before we're pushing the limits again, and it really puts us at the very high end, as much as any computing device can handle. And actually, though, this transition will be a smoother transition than any of those that came previously. In fact, the speed of the transition is the best evidence of that.
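The progression Gates sketches, 16, 20, 24, 32 and now 64 bits, plus the 8086's shifted-segment trick, in concrete terms (an editorial illustration):

```python
for bits in (16, 20, 24, 32, 64):
    print(f"{bits}-bit: {2**bits:,} addresses")  # 64 KB, 1 MB, 16 MB, 4 GB, 16 EB

# The 8086 scheme: a 16-bit segment shifted left 4 bits, plus a 16-bit
# offset, yields a 20-bit physical address.
segment, offset = 0xB800, 0x0010
print(hex((segment << 4) + offset))              # 0xb8010
```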

There was a lot of pioneering on 64-bit done on other architectures, on the Alpha, and then Intel came in with Itanium, really pushing 64-bit into the mainstream. But the simplest way for people to move up to that is where 64-bit has been added to the x86 instruction set. So, AMD pushed forward on that, really promoted that as a good way to go. And now, of course, Intel will have those capabilities as well.

And so, between now and the end of 2005, we'll go from having very few 64-bit chips out there to virtually 100 percent of what AMD ships, and the majority of what Intel ships, being 64-bit capable chips within less than two years. Now, of course, the different pieces have to be orchestrated here, but because the chip companies figured out how to do 64-bit without some huge premium in cost, they can go ahead and get those 64-bit capable systems out there, and those are also systems that run the 32-bit operating system. And so the relationship we've had with these companies, putting this together, doing the testing, getting the tools in place, has really been quite fantastic. And so this is going to be a really wonderful transition.

Today, we're announcing Windows XP 64-Bit Edition for 64-Bit Extended Systems. Windows XP home use, Windows XP business use, all of these things will benefit from 64-bit. And it's very broad coverage, you know, DirectX, Media Player, wireless, Bluetooth. In fact, it's easier to list the few things that we don't yet have. We don't yet have the DOS 16-bit capabilities, and we don't yet have all the device drivers. Jim mentioned there's a real call to action from us here. Let's make sure the device drivers are not a gating factor for people moving to 64-bit. We've done good toolkits for that. We need your feedback on whether there's anything more we can do to make sure that piece comes together very quickly.

So, a phenomenal change. As you've heard from Jim Gray, this is letting us achieve wonderful performance levels. I think you'll see this come in on the so-called workstation, the high-end desktop, and the server very rapidly; 64-bit servers will be commonplace within the next couple of years. At the desktop level, even there, there's a lot of benefit because, of course, with the 64-bit chip, in addition to that address space, that's where the bigger registers and some of the extra floating-point capabilities have come in. So even people building entertainment applications will want to exploit the unique capabilities of the 64-bit chip. So a very big deal. You can almost imagine that in the past, we would have had to have four or five conferences where the only topic was making this address-space transition. Here it's just one of many things we can talk about that's going on in parallel, and going on in a fairly straightforward way.

Another big thing we'll see from processors, of course, is multiple cores. And that's an interesting challenge in terms of software being able to fully exploit the parallelism inherent in that. In fact, at the server level, over the next few years, they'll move up, in some cases, to as many as eight cores. And at the desktop level, two cores will become very straightforward, and in some cases we'll have four cores. So, we're working hard with ISVs to make sure their applications actually take advantage of that. In a sense, it's taking hyperthreading, which is a little bit of parallelism, to a more general, more flexible level, and now saying, hey, let's really make sure we take full advantage of that.
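"Taking advantage" of multiple cores means carving work into independent pieces that can run at once. A minimal sketch with Python's standard process pool; the tile-rendering workload is made up for illustration:

```python
from multiprocessing import Pool

def render_tile(tile_id: int) -> int:
    """Stand-in for an independent chunk of work, e.g. one tile of a frame."""
    return sum(i * i for i in range(100_000 + tile_id))

if __name__ == "__main__":
    with Pool(processes=4) as pool:   # one worker per core on a 4-core part
        results = pool.map(render_tile, range(16))
    print(len(results), "tiles rendered in parallel")
```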

Storage, you know, not enough good things can be said about the rate of improvement in different storage devices, whether it's solid-state storage or magnetic storage. This is what's enabling photos, movies, large databases, all of this stuff really coming down into the mainstream. In some ways, I think this is the first time I can say that the floppy disk is dead. You know, we enjoyed the floppy disk, it was nice, it got smaller and smaller, but because of compatibility reasons it sort of got stuck at the 1.44-megabyte level, and carrying them around, and having that big physical slot in machines, that became a real burden. Today, you get a low-cost USB flash drive with 64 megabytes on it very, very inexpensively. And so we can say that something that's smaller, with better connectors, faster, just superior in every way, has made the floppy outmoded. And so we see an explosion in those USB storage devices. We even see those over time moving over and being something that will connect up to cameras as a way of moving bits between the camera and the PC, although wireless also will come in much more as a key factor there, where you do the automatic transfer as soon as the wireless connectivity becomes available.

We're doing a lot of work on non-volatile RAM, so that caching storage and even faster boot-up can take place. People are willing to pay a premium for things that have that immediate-on capability, although we expect over time that will move into the mainstream. The disk people, of course, are doing a great job with their capacities, and letting us abstract some of the array-type capabilities, some redundancy-type capabilities, up into the software level, so that the cost of these hardware systems gets to be a lot lower, since, between what's built into Windows as standard and these standard connections, you get what you would have had to buy through very expensive controllers and software in the storage subsystem itself. So, good cooperation on taking those boundaries and pushing those forward. DVD is becoming standard. On read-write, we still have the multiple standards, but complete support for those in the different things that we're doing.

So, how do we think about, at the Windows level, taking this advanced processor capability and these storage capabilities, and really saying, let's provide a benefit of that to users? Under today's Windows, the extra capacity is available, we've had 64-bit file systems for a long time. So we're not running out of address space in terms of what we can do there. But what we are seeing is that the complexity of all the different ways you organize and move information around is overwhelming. The fact that mail, music, photos, e-mail, copying, replication, directory -- all these things are very different -- makes it hard to set things up. You know, in the home, which machine has which files on it? How would you, as you replace a machine, get all that storage updated? Are you overflowing the capacity here, and therefore how would you spread it around in some simple way?

What we need to do is take these volume-level things and make them so that the user doesn't have to think about it. Have a file system that has enough database-like capabilities that all the applications just work at the file-system level; they don't build something that's opaque to the other applications because the file system couldn't do it for them. And that's a key breakthrough in "Longhorn." "Longhorn" is a lot about the fundamentals, the reliability, security, ease of setting things up. It's a lot about visualization, using the latest graphics and new interaction techniques. And it's a lot about these storage breakthroughs; it's the WinFS file system that's built in there.
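The "database-like capabilities" idea is that applications query items by their properties rather than crawling folder paths. A toy illustration with an in-memory SQL table (WinFS itself was far richer than this; all names and data below are invented):

```python
import sqlite3

# Illustrative only: a file store exposing queryable item properties,
# so every application sees the same photo/music/mail metadata.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (name TEXT, kind TEXT, author TEXT, taken DATE)")
db.executemany("INSERT INTO items VALUES (?,?,?,?)", [
    ("beach.jpg",  "photo", "kim",  "2004-04-01"),
    ("plan.doc",   "doc",   "jeff", "2004-03-15"),
    ("family.jpg", "photo", "kim",  "2004-05-01"),
])

# "All of Kim's photos from this spring" -- a property query, not a path.
for (name,) in db.execute("SELECT name FROM items WHERE kind='photo' "
                          "AND author='kim' AND taken >= '2004-04-01'"):
    print(name)
```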

We're making great progress on this, this is the year that we'll get really a pretty fully capable version of that out in the hands of developers to give us feedback, and really understand, did we get this breakthrough advance in storage, did we get all the pieces we need there so that software developers and users will see what we want out of that?

But it really speaks to the idea of having multiple devices, having information on those, and making it easy for the user to navigate, not thinking about volume limits, not thinking about different information types coming in different ways, and a big piece of work to make that come together.

Now, what I said at the beginning here, we see the PC as not just catching up with other computing. We see it really driving the frontiers of computing, the address space, the performance, neat new things. And we have a dedicated focus that we've increased quite a bit on these issues around the PC with Windows, and high performance computing. And so, here to talk about some of the projects that we see and some neat progress on that, we've got Kyril Faenov from our High Performance Computing Group, who is going to show you some really amazing things that he's been able to do working with our partners.

Welcome, Kyril.

KYRIL FAENOV: Thank you, Bill.

Hello, everyone. At Microsoft we're incredibly excited by the possibilities which are brought about through convergence of affordable 64-bit computing and advanced scientific computations. I'm leading a product unit in the Windows Server Group that has a mission to bring high-performance computing to the mainstream. What I would like to show you today is some of the things that are possible right now to help scientists and engineers achieve their potential.

Computational fluid dynamics, or CFD for short, is a simulation technique that calculates the movement of particles as they're dependent on other particles and the environment in which they flow. Examples of that might be water flowing through a pipe, or air circulating through a building, or moving over an object. Simulations like that require a lot of data, require interaction among the parts of the simulation, and a lot of feedback loops.

To make this as close as possible to real-life scenarios, you need a lot more memory capacity than is available today in 32-bit systems with their 4-gigabyte limit. Microsoft's partner HLRS, the high-performance computing center at the University of Stuttgart, developed an advanced methodology that compresses the whole simulation cycle from hours or days to minutes or seconds, running on affordable 64-bit computers that can be easily acquired by any organization. Taking advantage of the high-performance computing capabilities provided by 64-bit Windows really opens the door for what we like to call personal supercomputing.
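To see why CFD is among the first workloads to burst through the 32-bit barrier: even a modest structured grid with a few double-precision variables per cell already needs more than 4 GB. The grid size and variable count here are illustrative:

```python
cells = 512 ** 3           # a 512x512x512 uniform grid
vars_per_cell = 6          # e.g. pressure, 3 velocity components, 2 extras
bytes_needed = cells * vars_per_cell * 8   # 8 bytes per double

print(bytes_needed / 2**30)   # ~6 GiB -- already past the 4 GB 32-bit limit
```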

I would now like to invite someone from HLRS to join us to demonstrate interactive simulations and visualizations on Windows today. Just a quick note: this demo contains simulated motion in a three-dimensional environment. If you feel you're at risk for photosensitive seizure, or you have experienced motion sickness, you might consider going to another location, or simply not viewing the display screen during the demo.

Thank you.

UWE WOSSNER: Hello everyone. I'd like to show you a simulation of water flow through a water turbine. The presentation will start off with a flight over a river, the Neckar, which is in Southern Germany. We will dive into the water, swim towards one of the turbines, and there you will be able to analyze how the water flows through that turbine. In the background we're running a simulation which continuously computes that water flow, and this allows us to interact with the simulation and update the results constantly.

So to run the simulation we are using a cluster of eight dual-processor x64-based Intel servers. They are running Windows Server 2003 for 64-bit Extended Systems. And those are connected by gigabit Ethernet to two client machines, equipped with NVIDIA graphics boards, that run the visualization you see here on the center screen. Furthermore, I've got this Tablet PC that I'm using to steer the simulation and visualization.

The software that we're using is (Fenflood ?) from the Institute for Hydraulic Machinery at Stuttgart, and the visualization software (Covise ?) is developed at HLRS.

So now please put on your glasses and buckle up, tighten your life vest and we're ready to go.

In the middle you can see the building that is hosting the water turbine, and we'll now dive into the water. Here in the inlet channel you can see streamlines and animated particles representing the flow of water. The colors on the particles that will appear soon, here we go, represent the velocity of these sort of water drops. And we can try to follow those particles down towards the turbine. So this visualization that you're seeing is kind of state of the art in visualization of simulation results, but to make it clear, what you see is the result of physically accurate numerical simulations, and not just animated computer graphics. And the real point that we want to bring across is that you can interact with the simulation, and thus interactively change parameters of that simulation, even change the geometry of the product that you're simulating, and thus optimize your product.

So what I will do now is close the gate, and let a little bit less water flow through the turbine. You will see the guide wings close right now, and what is happening is that this new geometry is used to derive a new computational mesh, which is divided up into eight pieces and sent to the compute servers; a new simulation is performed, and the results are sent back to the visualization machines, which display them on the screen now. And you might have noticed that the particle paths actually changed, and they adapted to that new geometry configuration.

So let me zoom in a little. I'll step into the turbine and guide you through a little, we can step in here. So this technique allows scientists and engineers to literally step inside their products, and interactively optimize it for performance, or in this case power output. And the only way to make this simulation fast and accurate enough for real-world applications is to use the capabilities that 64-bit computing is giving you.

So I will fly out a little, and turn off the surrounding geometry. So this is just the simulation result. And now I'd like to go back, swim through the turbine, and swim out into the river again. What you just saw is only one example of personal supercomputing; other application areas would be in life science, in the medical field, in molecular dynamics, in architecture, in space engineering. Together with Microsoft, we are working on getting this technology beyond the research labs and big companies, and into the ordinary engineering companies and supplier firms.

KYRIL FAENOV: This is really some astonishing work you've done at the University of Stuttgart, thank you.

The same simulation you just saw would have required multi-million-dollar supercomputers just a few years ago, might have taken hours or days, and certainly would not have been interactive like what you just saw. This is just one example of a new usage scenario that's made affordable and broadly available to universities, research organizations, and corporations worldwide. 64-bit computing is becoming mainstream and ushering in the era of personal supercomputing. With more data on the back end, and more data being computed by server farms, scientists need to consume and visualize that data on their workstations and desktops. And we're certain that this will drive adoption of 64-bit personal computers and workstations. Now, we know that many organizations will start looking at investing in advanced computation, business intelligence, server consolidation and other advanced areas. And the only thing that holds back rapid adoption of 64-bit computing is the availability of high-quality drivers, anti-virus software, and other platform components. This is really an opportunity for each one of you to come together with Microsoft and take a leadership position in driving adoption of 64-bit computing in the industry.

Thank you very much.

(Applause.)

BILL GATES: So we talked about the processor and storage, and there are many other hardware advances, far more than I can cover here. But a few I really wanted to highlight, where we see hardware advances and software advances coming together in parallel. High-resolution displays, the size of the display, the ability to use multiple displays, really have a profound effect on how we think about the user interface. Finding things on the displays, being able to have secondary information that's easier to just glance across and see there, there's been a lot of experimentation there, and now with the lower cost, this is moving into the mainstream. We think every desktop will have at least a 20-inch display, and people who deal with lots of information will either go with a much larger display, or the multi-display type approach.

The camera built into the PC for video conferencing, presence awareness, taking meetings, being able to record those, provide different points of view, and record what was on the white board, the sequence of things taking place there. We're building a lot of software that assumes that camera and takes advantage of the neat things that it can do.

A microphone, we see this as something that's very fundamental. Voice communications will be a standard part of the PC experience, not just talking to the PC for command recognition, but voice annotation, voice instant messaging, Voice over IP. The microphone will take many different forms. The so-called wireless headset types will be important, being able to use just a normal phone as a peripheral through a Bluetooth connection, being able to speak at a distance and have a microphone array, which some pioneers in the PC industry are now providing, do that noise elimination. All these different input techniques will bring voice to the center, and we'll make sure the software is there that fully exploits that.
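The microphone-array noise elimination mentioned here is classically delay-and-sum beamforming: shift each microphone's signal to line up the talker's sound, then average, so the voice adds coherently while independent noise averages away. A small numpy sketch with integer-sample delays (all numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, mics = 4000, 8
t = np.arange(n)
voice = np.sin(2 * np.pi * t / 50)            # the talker's signal

# Each microphone hears the voice after a known delay (set by the array's
# geometry and the talker's direction), plus its own independent noise.
delays = np.arange(mics) * 3                  # delays in samples
channels = [np.roll(voice, d) + rng.normal(0, 1.0, n) for d in delays]

# Delay-and-sum: undo each delay, then average. The voice adds coherently;
# independent noise drops by roughly sqrt(number of microphones).
beam = np.mean([np.roll(ch, -d) for ch, d in zip(channels, delays)], axis=0)

print(np.std(channels[0] - voice))   # ~1.0  noise level at one microphone
print(np.std(beam - voice))          # ~0.35 after combining 8 microphones
```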

We see now a new class of PCs that are being made smaller and smaller. In fact, the boundary between where a big phone stops and a small PC starts is an area that's just being explored. The PC will go down to be even smaller than those large phones come up to, and people can play around and see what they like best. These types of devices are very tricky in terms of getting the battery life right and getting the input techniques right, making sure that the applications are usable on that smaller display. And there are a lot of collaborations there, because we believe in going as far as we can with the smallest PC, and then having that be a PC that's complementary to your larger PCs, with the information easy to move around. Sensors like RFID, portable storage devices like the Portable Media Center, the phones and the way they connect up and share information, including photos, all of those are experiences where the hardware advance is a necessary piece to make it happen.

For us, we've got to bring speech into the mainstream. Our Speech Server just launched last month; that's a high-volume, low-cost way of putting powerful speech recognition onto a standard server, then programming up, using normal Web development tools like Visual Studio, the kind of interaction that might make sense. Over time it won't just be phone and screen as two separate things, but rather the phone and screen brought together, using 3D graphics in the standard interface, building in the connections out to the PBX for real-time communications. Making it easy to have the kind of rich display that you would think of, making the PC the best reading device, there's still a lot that we have to do there, but we're big believers in that, because of the richness of navigation, annotation, immediacy, and digital rights control that the digital approach provides and no other approach does. So with visualization, we're just at the beginning of what can be done.

I talked about all of these devices and how they work together on behalf of the consumer; the best way to understand that is to see it in action. So I'd like to ask Steve Kaneko from our Windows Hardware Innovation team to come up and show us a scenario of how this is going to come together.

Welcome Steve.

STEVE KANEKO: Hi, Bill. How are you doing? Good to see you.

Good afternoon. I'm very excited to be here, to introduce you to the Windows Home Computing Concept. It's a collaboration between Microsoft and Hewlett Packard. Now, in the next few minutes we're going to use two new concept PCs to demonstrate our vision for how Windows-based products evolve into new roles in the home, focused on the way that families naturally live, the way they experience entertainment, and the way they aspire to manage their communications.

Let's start here in the living room. I'd like to introduce the first concept PC, which we call the Home Center. As we'll see soon, it truly is the center of our digital lifestyle. Its industrial design is all about extreme simplicity, no fussy wires or controls, all done with the very refined styling that our customers demand of equipment placed in the home. Everything we need for a quality entertainment and communication experience is here: dual high-definition TV tuners, an 802.11 access point, a broadband modem, and a Voice over IP gateway, all brought together in a very simple Windows Media Center experience.

So let's put this technology to work and see how simple the future could be just to listen to music. I'll go ahead and pick up my remote control here, which could have come from a wireless inductive charger, and I'll simply press the biometric sensor on it. Notice how the system immediately recognizes who I am, going to my personal preferences for music. That's pretty neat. Now, when we have a terabyte of storage in the near future, I could have to go through thousands of songs and a number of different digital media. The rich display on this remote will let me do that very easily, but because what I'm asking for is fairly straightforward, instead I'll press the top button on this remote and say, play Jason Mraz's latest album. I'll go ahead and pause that. Notice how the system immediately went to work for me, using the processing power of the PC, powerful voice search and recognition software, and the embedded microphone on this remote, all coming together to produce that very, very simple result. It's pretty neat.

Now, it is pretty interesting that the display here is off, and the system seems awfully quiet up here; is the PC on, or is it off? We happen to be running the technology that you saw in Jim Allchin's demo this morning, so it's always ready. Windows is actually quietly and consistently running in the background here, intelligently managing the CPU and system resources, which allows us to throttle the CPU down, alleviate the need for a noisy fan, and offer the true on-and-off performance that consumers demand out of consumer-electronics-like products in the home. And maybe most importantly, it enables Windows and Windows-enabled services to offer, and continue to offer, functional value to your hardware products 24/7, all the time.

So let me show you another very cool example of how this technology can actually raise the bar on entertainment in the living room. I'll go ahead and drop this PC game into the slot here, and within seconds, instant, head-to-head gaming, with two wireless game controllers, right here in the living room; notice how that DVD actually woke the display up. Now, what we see here is a PC game title from Electronic Arts called "Need for Speed Underground," taking advantage of the system's graphics capability and 7.1 surround sound, all coming together in an amazing PC experience.

Now, because this is a highly integrated system, I can simply, with one press of a button, seamlessly go to a number of different experiences here. Now, let's talk about a new experience that we're bringing to the home concept around communication. Our research shows that families, more so today than ever before, really have challenges in the way they communicate with each other in their busy lives, as well as in managing activities. Let's see how the Home Center addresses that.

We'll go ahead and go to My TV and start playing some live TV, and let's see what happens when a call comes in. Boy, it looks like a great part of the show. It's my buddy Jay, and he always seems to call at the wrong time, but I'm going to go ahead and pick this call up from the remote control here. Hey, Jay, it's a little hard to talk right now, I've got a few people over. Can I get back to you? Got it. Thanks, Jay.

Now, obviously, if that were a private call I wouldn't have his voice coming through our system speakers. I can pick up my cell phone, my home phone, or maybe even use my remote as a handset to privately take that call. Now, notice something else that's pretty interesting here: when Jay called, the system immediately did a reverse call look-up, knew it was Jay calling, and presented me with some interesting options here, where I can see some of the shared activities that we've had in the past. If I ever wanted to, I can certainly go in and see more information about Jay, whether it's his work phone, home phone, or how to contact him via e-mail. And if there is that ever-so-popular golf date that we've been trying to get to at 2:00 p.m. on a Friday, to get out of work sooner, I could go to my own calendar, which synchronizes my work and home calendars, and maybe even schedule that here from the couch. How's that for work-life balance? Pretty neat.

So let's go ahead and end that call. Notice how the system picked up right where we left off, keeping me 100-percent in control of my entertainment experience. And I didn't miss any part of this TV show here. So, speaking of being in control, if families or anyone in the home doesn't want to be disturbed, and wants to just experience a DVD or a television program, they can set the system state to do-not-disturb. What this does is set the presence outside as busy or away, and it'll reroute all incoming calls directly over to voice mail. So, let's go ahead and pause this.

Now, I want to walk over here to the second concept PC that I'm excited to share with you. We call it the Home Tablet. Now, it, too, can provide the integrated communication and entertainment experiences that you just saw, with the added benefit of being 100-percent mobile, and allowing for the natural pen and touch interaction that is very appropriate for the home. We've had some fun here. Notice that its charging tray is on the wall, and we're actually using that as a communication center. Family members can tell from a distance, with color-coded LEDs that may be assigned to them, that they have calls waiting, which is pretty neat. We could also use that auxiliary display, just like we did on the Home Center, to give us more caller information.

Now, on closer inspection, what we have here is a home page for one of the family members, customized the way that she wants it. You can see her inbox, her personal calendar, and her personal playlist, all easily accessible with one touch of the biometric sensor, which would certainly keep someone like myself out of it. Every family member could have this highly personalized experience, which we think is very important for the home.

I'll go ahead and take it off its charging tray. Notice that the system automatically knew I have it in landscape mode; the user interface shifted over. Now, because this tablet is constantly connected wirelessly to the home center, I can actually watch TV here and use this essentially as a wireless TV in the bedroom, in the kitchen, anywhere I want to watch TV where it's in range. It's using the second tuner on the home center, so I could even go into the living room while someone is channel surfing and watch a separate session going on here. The other advantage, obviously, is that I could be doing other activities at the same time, such as reading e-mail, browsing the Web, et cetera.

Now, as we mentioned, home life is not just about TV watching and entertainment; that's certainly a good part of it, but we'd like to show in the final two demos how communication and sharing can be enabled with something like this home tablet. I have a couple of messages I would like to respond to. The first one here, from Jeff, asks what time dinner is. Now, I've actually grouped my family into one shortcut, and from here I can open up a message and quickly ink a reply: dinner is at 6:00.

Now, instead of just sending it to Jeff, I want to send it to the entire family, and with one stroke I'll broadcast this. That's interesting, it did a conversion there; sorry, let me try that again. Dinner is at 6:00. OK, with one stroke I'll broadcast this to all the family members: it could arrive as an e-mail message at work on a PC, as an IM message on my daughter's laptop in the room upstairs, and as an ink message on my son's cell phone at school, just by simply clicking. And what you saw here, though it went away very quickly, is an example of how that ink message actually went over to the home center, which could just as well have been a PC.
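
A minimal sketch of how such a one-stroke broadcast might fan a message out over a per-device transport; the family records and the send function are hypothetical placeholders, not the demo's actual plumbing:

```python
# Sketch: the same message is delivered to each family member over whatever
# transport their current device supports (e-mail, IM, or ink).

FAMILY = [
    {"name": "Jeff",     "device": "work PC",              "transport": "email"},
    {"name": "daughter", "device": "laptop upstairs",      "transport": "im"},
    {"name": "son",      "device": "cell phone at school", "transport": "ink"},
]

def send(transport: str, device: str, message: str) -> None:
    # In a real system, each transport would call a different delivery service.
    print(f"[{transport}] -> {device}: {message}")

def broadcast(message: str) -> None:
    for member in FAMILY:
        send(member["transport"], member["device"], message)

broadcast("Dinner is at 6:00")
```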

OK, there's one more scenario here that I'd like to share. Now, Jeff had mentioned that his wife, Kim, had sent us a Photo Story message, and I'd like to open up that attachment right here on my Tablet. All right, that's pretty cool. So now I can watch this anywhere in the house. Now, let's say this is so neat that I want to share it with the rest of the family. With one stroke I can bring it up on a large plasma display for the whole family to enjoy. Pretty neat.

All right. Finally, with that same wireless connection, this Tablet could actually take recorded content downloaded from the home center itself, allowing me to take my media outside the house, whether onto an airplane, into the back of the car, or even to a soccer field, wherever it may be. Pretty neat. I'm going to go ahead and put this down.

So that's all we have time for in this demo. If you would like to learn more about the Windows home concept, please visit us in the Windows Showcase. There will be people there to answer any of your questions about the enabling technologies, user scenarios, and business opportunities that were just shown here, and we'll be open all throughout this event.

Just in closing, what you just saw here was an awful lot of technology that already exists in this room, within Microsoft and the software industry. What we've done is bring it together into some very simple but compelling user scenarios that our customers crave. And we hope you agree that, working together, we can deliver these scenarios; they don't have to stay a vision demo. We can actually deliver them and make them an amazing reality in the very near future.

Thank you.

(Applause.)

BILL GATES: One of the topics that's always worth addressing at WinHEC is the work we're doing as an industry to make the PC experience a higher-quality experience: really understanding where the frustrations, the crashes, and the problems are coming from, and then not only knowing about them but also being able to fix them for users.

A big advance in this has been the reporting information we get back from Windows XP systems whenever there's a hang, a crash, or a number of other conditions that we view as irregular. We see those every day. We categorize them, and we understand what driver, what application, and what system each one is on. And that's been phenomenal, both for us and for the industry, in being able to say, OK, where are these things not fitting together in the right way? Let's change that.
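
A minimal sketch of this kind of categorization, bucketing incoming reports by driver and application so the worst offenders surface first; the report fields below are assumptions for illustration, not the actual Windows Error Reporting schema:

```python
# Sketch: count crash/hang reports per driver and per application, then rank
# the buckets to decide which combinations to fix first.
from collections import Counter

reports = [
    {"kind": "crash", "driver": "video_driver_a", "app": "game.exe"},
    {"kind": "hang",  "driver": "video_driver_a", "app": "player.exe"},
    {"kind": "crash", "driver": "audio_driver_b", "app": "player.exe"},
    {"kind": "crash", "driver": "video_driver_a", "app": "game.exe"},
]

by_driver = Counter(r["driver"] for r in reports)
by_app = Counter(r["app"] for r in reports)

print("worst drivers:", by_driver.most_common(2))
print("worst apps:   ", by_app.most_common(2))
```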

A good example of this is looking at something like a video driver and saying, OK, how can we make sure it works well? Well, it's getting the update out to users that has been the challenge.

Now, with more and more people going up to the Windows Update site, we got some enlistment where people would take new drivers. But the thing we think will be a radical advance is where we have improvements that are very well tested and we can mark them as critical fixes, so that for a higher and higher percentage of people they go down automatically onto their machines. In fact, we expect over time that will be over 90 percent.

Here we see this at work. We took an NVIDIA fix, and instead of just having it on Windows Update where you can go and get it manually, we actually marked it critical. And you can see that even now, where not everybody uses automatic update, enough did that we had a 70 percent drop in that type of problem over just a few months, which is quite dramatic. What we'd like to do is reach out to all of you and work together on this kind of thing. These need to be fixes that are well understood, that we can blast out to all of the different systems and know that they're a significant improvement for people. We think there are many, many opportunities like this, so that we really complete that cycle: even for the current installed base, not just people who buy new machines, we're fixing things as they come along.
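
For illustration, the arithmetic behind a figure like that 70 percent drop is just a before/after comparison of the incident rate for one problem signature; the numbers below are made up:

```python
# Back-of-the-envelope sketch: relative reduction in a problem's incident
# rate after a fix is pushed as a critical update. Figures are hypothetical.

def relative_drop(before: float, after: float) -> float:
    """Fractional reduction in incident rate after the fix rollout."""
    return (before - after) / before

weekly_incidents_before = 10_000   # hypothetical reports/week pre-fix
weekly_incidents_after = 3_000     # hypothetical reports/week post-fix

drop = relative_drop(weekly_incidents_before, weekly_incidents_after)
print(f"drop: {drop:.0%}")         # -> drop: 70%
```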

And the benefit of that, in terms of user happiness, fewer support calls, and more willingness to try out the more advanced scenarios that we're all investing in, I think will be very, very dramatic. So that reporting data, and the way we share it together, is a very key tool for all of us.

Well, let me close by saying that this seamless approach is about enabling the new scenarios. We need lots of different devices, and we need the more powerful forms of those devices. 64-bit is probably the big headline here, because this is the first WinHEC where we've really been focused on it, and it's so clearly actionable for everybody involved. Plug and play is another headline: we see how to build that in with the development tools, and with the kind of scaling that the Web-services approach provides, and now a lot of these newer experiences.

And so really the sky's the limit of what we can do together, a lot of years of opportunity as we drive seamless computing. Thank you. (Applause.)
