Amazon Web Services re:Invent 2022 - Driving Compute Innovation in the Cloud, Empowering Applications of All Kinds

Please welcome Vice President of Amazon EC2, Dave Brown.

Good afternoon, everybody. It's really great to be back at re:Invent, and welcome to the 2022 Compute Leadership session. It's been a great start to the week, meeting with customers, and I'm really excited to share some of the innovation that we've been doing in compute over the last year, along with some exciting announcements as well.

I joined the EC2 team a little over 15 years ago in Cape Town, South Africa and back then, there were 14 people on the team and I can always say we had no idea what we were actually building. It's been an incredible journey and I was thinking back and there's a memory that I have from November 2008 and that was the day that we actually launched our one millionth instance. And so the little incremental ID that we had inside our data store ticked over to 1 million instances. 

Well, today, I can tell you that every single day on EC2 customers launch over 100 million instances. And to date, we've launched 30 billion instances on Amazon EC2. What's also true is the ambition and the drive that we had back in 2006 when we launched EC2 is still the same today and it's driven the innovation that you've seen from EC2 in all of these years. 

The first is that we're looking to provide customers with tools and services that allow you to reliably run virtually any workload that you want to bring to the cloud. And we always say we want you to be able to do it on AWS and on EC2 better than you could in your own data center.

The second thing is that we wanna continuously focus on improving performance while reducing costs. And as you look at the EC2 roadmap over the years, this premise has been something that has driven us to deliver that value to you as a customer.

One of the customers that we work with very closely, and have worked with since 2018 when they went all in on AWS, is Epic Games. As you know, Epic developed a game a couple of years ago called Fortnite, and maybe you, or many of your kids, play it; I know my kids play Fortnite on a regular basis. And Epic needs an incredible amount of infrastructure to actually run Fortnite.

And with AWS and EC2, they've been able to scale that to support their business. Every time they do a new chapter release, or there's some sort of event, Epic literally needs hundreds of thousands to millions of cores that they need to be able to scale up, literally overnight, and sometimes on very short notice. The scale of EC2 and the diversity of our instance types has allowed them to run that business and do that. It's an incredible feat.

The other thing they've done is recently move a lot of their workload to Graviton2 processors, which has allowed them to save further costs while delivering that scale.

In the talk today, I want to cover a few things across EC2, and I have a few guests joining me as well. First, we'll look at our world-class scale and performance. Then we'll have a little chapter on operational excellence, some of the things that we do behind the scenes to make sure that you have a great instance experience.

We'll look at some of the workloads that customers are running. We've launched new instances as well, and we'll be launching a few that haven't been launched yet. We'll also look at some cost optimizations and best practices, and give you some insights into how we actually manage costs within EC2.

And finally, we'll look at some compute services where we're actually bringing compute closer to you and have a very special guest in that section as well. So we've got a lot to get through. We've got a little under an hour. Let's get going.

I like story time. I think we all love to hear a good story, and, you know, I always like to tell some good stories. Well, when we started building compute back in the day, when we built EC2, one of the funny things is we'd get college hires in for interview loops, and we'd say to them: just draw EC2 on the whiteboard. And many of them actually got it exactly right. It was actually pretty simple: a couple of distributed control plane services that really just used Xen as a hypervisor. And we really focused more on the automation side of the cloud than on performance and optimization.

But in late 2007 and early 2008, we started to see some challenges as customers brought more and more workloads to AWS, and one of them was performance jitter. Although in the common case Xen worked really well as a hypervisor, in many cases customers would see latency outliers at the P99 or P99.9 percentile that really made it impossible to run some workloads. And this became the big challenge: how would we go and fix virtualization to actually make it work?

And back then, everybody was talking about bare metal workloads: could the cloud work as well as it did in my data center? And actually Peter DeSantis, who was with us in Cape Town at the time, and whose talk some of you might have seen last night at Monday Night Live, was in Seattle, where he met with James Hamilton. They had this discussion about whether there was something we could actually do to offload some of these things we were doing in software to hardware.

And James had heard about Nafea Bshara from Annapurna Labs, the organization behind our Graviton work, and Nafea and James actually got together at this little restaurant that's still there in Seattle called the Virginia Inn. As James tells the story, he said: I was just gonna go for one beer, talk to Nafea about some of the things they were doing, and then stick around if Nafea had a good story. And what Annapurna was doing at the time was building some custom silicon.

And so this little restaurant is actually where the Nitro story started, because James and Nafea started to talk about the opportunity to offload some of the functionality we were doing in software into hardware, and to do that on Arm-based processors. And this is all the way back in 2009, super early days.

Well, ultimately, we started building Nitro cards. Eventually, we liked the Annapurna team so much that we actually acquired them, and they've become an integral part of the AWS team. The very first instance that used Nitro actually shipped in 2013; we didn't even have the name Nitro at the time, we were just offloading networking. But over the years, by 2017, we shipped our first fully Nitro-enabled instance. And Nitro still remains a differentiator for AWS today; no other cloud provider has anything like it.

And what it basically means is we use the Nitro cards to run all of our software. So all of our networking, all of our security, all of our IO that's going to local disks, all of it runs on Nitro cards. We use 0% of that central CPU. That's great for two reasons. One is we're not sharing time on that CPU with you at all; there's no time slicing or any sort of steal time happening for our applications. From a security point of view, that's really good as well. But it also means that you get more of that processor, you get more of the system resources. So from a cost point of view, that's beneficial to you too: on the same instance, you get more.

We also have the Nitro security chip, which we actually built directly into the motherboard. We design and build our own motherboards as well, and the Nitro security chip lives on that motherboard. We'll talk about that a little bit more later. And then we built our own hypervisor. So we replaced Xen with a scaled-down hypervisor that runs between the hardware and the Nitro cards and your application, and it virtualizes the machine in an incredibly performant way as well.

And so when you look at a server today, a Nitro server, or really any AWS server, at the lowest layer you've got the Nitro cards, and they provide things like security, storage, and network offload. Then you have the Nitro hypervisor that sits on top of that, then your instance OS, whatever operating system you've chosen to run, and then your application on top of that.

So no part of our software runs on that central CPU, whether it's Intel, AMD, or Graviton.

We also want to give you a view of what Nitro actually looks like. These are the Nitro cards. The one on the top left is the very first Nitro card we actually got from Annapurna Labs. There are a number of them that are used for different things: some of them do network offloading, some of them handle EBS and IO to local disks. And we've actually just launched what we call our Nitro v5, our fifth-generation card.

Now, there's more than five of them there. The fifth-generation card is specifically focused on networking. But over the years, we've actually shipped 20 million Nitro cards across EC2, all coming from Annapurna Labs. Just an incredible story.

So where does Nitro really benefit you as a customer? In the very early days of cloud, we used to get special CPUs from our CPU manufacturers, and so SPECint was actually a meaningful benchmark. If I ran SPECint on one cloud, or I ran SPECint in your data center, or I ran SPECint on another cloud provider, I might get different scores.

What's happened over the years is SPECint really shows everybody is running pretty much the same Skylake processor, Cascade Lake processor, Ice Lake processor, whatever it might be. So you might not see it from a SPECint point of view. Where Nitro really benefits you, and what's important to your applications, is when you think about the IO and the performance across the entire application stack. Because I'm not competing with you for time and cores on that CPU, you're not seeing steal time. And because Nitro handles all of the IO, offloaded to the cards, you get better IO performance too.

We get much better network performance, much better IO to local disks, and you get much better EBS network-attached storage. And you can see that in the workloads. You can see that with Memcached: this is just a Memcached benchmark, nothing fancy, we haven't tweaked it, and you can see we're as much as 22% faster than another cloud provider.

You can see that with NGINX workloads at scale, a 16% benefit because of Nitro, and on a Redis database solution, as much as 27% faster. And you really see that benefit as you start to scale your application and push performance.

So I want to dig into a few areas of Nitro, and the first thing I want to start with is networking and how that actually works in the Nitro environment. When we launched our very first instance back in 2006, it was actually one gigabit, and we were pretty happy with that. We thought that was pretty good back in the day. But you can see how it's increased over the years, and 2019 was a big step forward: we went to 25 gigabits standard for Nitro cards. Actually, our latest generation of Intel instances, and AMD, and the new Gravitons, our sixth generation, is at 50 gigabits per second.

And then we've done some network-optimized instances, and that was the first time we got to 100 gigabits per second, in 2019. And this year at re:Invent, all of our network-optimized instances for the 6th and 7th generations are moving to 200 gigabits, and also seeing a significant increase in packets per second.

We also launched a 400-gigabit instance in 2020. That was an instance focused on machine learning, where customers are looking to run not just a single instance but in many cases thousands of instances, with very low latency networking between them, and they need a lot of bandwidth. So we thought 400 gigabits was pretty amazing, and we were very happy with that.

And this year, we've just announced the availability of a new Trainium instance, our own custom silicon, that's actually doing 800 gigabits per second on a single instance. And Peter DeSantis last night announced the availability of a Trainium network-optimized instance. It's pretty amazing that you need a network-optimized version when you're at 800 gigabits. That one will actually be 1,600 gigabits per second, and it'll be available early next year.

We're also happy to announce the availability of our sixth-generation Intel network-optimized instances, available today. What these instances provide is the latest Ice Lake processor, but with up to 200 gigabits of networking. Now, that's really important if you're doing a lot of network IO between instances, a very common use case. And you just had Mai-Lan earlier talking about S3 in a leadership session: when you're trying to download a lot of data from S3, these instances will now allow you to pull up to 200 gigabits per second from S3.
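To make that concrete, here's a minimal boto3 sketch of launching one of these network-optimized instances. The instance type name (c6in.32xlarge, the full-bandwidth size), the AMI ID, and the region are all placeholder assumptions for illustration:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Launch a sixth-generation Intel network-optimized instance; the largest
# size is the one that advertises the full 200 Gbps of network bandwidth.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="c6in.32xlarge",     # assumed name of the new instance family
    MinCount=1,
    MaxCount=1,
)
print(resp["Instances"][0]["InstanceId"])
```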

The other thing that's a big improvement here is packets per second. Packets per second is something we've actually focused on behind the scenes; we don't talk about it that much, but for a lot of network appliance type workloads, or maybe analytical type workloads, it's actually packets per second that becomes the bottleneck, not just the bandwidth. And so what we've done here is actually doubled the packets per second, and we're getting into the many tens of millions of packets per second on these instances.

The other one is we've optimized the EBS performance. Sometimes you've found that your connectivity through to your network-attached storage on EBS, at 14 gigabits, maybe wasn't enough for your workload, and you said, wow, it would be great if EBS had some more performance. What we've done with these instances is increase the EBS performance to 80 gigabits per second, as well as 350,000 IOPS to EBS.

Now, that 80 gigabits is not included in the 200 gigabits. So technically, you can actually do 280 gigabits of throughput if you max out this instance: 200 gigabits on the network and 80 gigabits on EBS. Incredible progress, and fantastic work from a Nitro point of view.

Next, I want to talk a little bit about storage. This is one of our newest Nitro products. Storage has been really challenging in the cloud in many ways. One of the hardest things to actually update, if we need to deploy firmware or do updates, is SSDs, because SSD manufacturers have always assumed that a customer could just move the workload or shut off the server and apply the update. We don't get that liberty at AWS; we have to do everything live. And when you do it live, you've got to make sure there's no stall time, nothing that you'll notice as a customer while we're doing it.

And so last year, we announced the availability of our AWS Nitro SSD. So now, alongside the motherboards and CPUs and everything else we make, we're also making SSDs. With the Nitro SSD, we actually provide the highest performance and lowest latency in the SSD space. With Nitro SSDs, you're getting up to 60% lower IO latencies just by using them on some of the new instances we've launched.

You have improved reliability, because we can go and update the firmware; we're able to go in and fix issues with the drives that we weren't able to fix on commodity SSDs. And then finally, from a security point of view, we've also added next-level Nitro security to this drive with AES-256 ephemeral keys.

Ephemeral keys basically mean that a key is generated in hardware every time you launch an instance. No service gets to see it; it never leaves the drive. And as soon as the instance is terminated, the key is cryptographically destroyed, which basically means that your data is not available to anybody else, or even to yourself, should you come back and ask us for it.

We're happy to announce a new feature today on the Nitro SSD that takes performance to the next level, and that's torn write protection, which significantly improves database performance with our Nitro SSDs running on our I4i and some of our other instances that use them. What torn write protection gives you, from a database point of view, is this: very often you have to enable doublewrite protection on your database, because if your machine loses power, or some issue happens to the drive or the system while a write is in progress, that write could be torn and lost.

With torn write protection, we guarantee that even under those circumstances, that write will never be lost. At a 16-kilobyte granularity, we can guarantee that those chunks are written to the disk completely or not at all. And so with torn write protection enabled, we've actually seen up to 30% higher database transactions per second, which is a significant performance improvement.
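To see why this matters, here's a conceptual Python sketch, not any database's actual implementation, of the doublewrite pattern that torn write protection lets a database skip. The file handles and the 16 KiB page size mirror the chunks described above:

```python
import os

PAGE_SIZE = 16 * 1024  # the 16 KiB write granularity described above

def write_page_with_doublewrite(datafile, journal, offset, page):
    """Classic torn-write mitigation: persist the page to a journal first,
    so a power loss mid-write to the datafile can be repaired on recovery."""
    journal.write(page)
    journal.flush()
    os.fsync(journal.fileno())   # extra write plus fsync on every page
    datafile.seek(offset)
    datafile.write(page)         # if this write tears, recover from the journal
    datafile.flush()
    os.fsync(datafile.fileno())

def write_page_with_torn_write_protection(datafile, offset, page):
    """If the drive guarantees a 16 KiB write lands completely or not at all,
    the journal write (and its fsync) can be skipped entirely."""
    datafile.seek(offset)
    datafile.write(page)
    datafile.flush()
    os.fsync(datafile.fileno())
```

Skipping that second write and fsync on every page is where the extra transactions per second come from.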

RDS is actually using this as well, and so is EBS. So some of your EBS usage, as well as your I4i and some of your Graviton storage-optimized instances, will start to see significantly better performance, because we can make this guarantee about not losing those writes to the disk.

Obviously, there's a cost angle as well. If you're able to turn doublewrite off in your database because you don't need it, and we're able to give you a 30% improvement in transactions, well, that means you can actually scale down your fleet. So there's actually a cost-saving benefit to torn write protection too.

This is one of the first features we're launching on Nitro SSDs. It's an area we're going to keep innovating on in the future, and we're excited to see what else we do.

One of the database products that's recently been running on Nitro SSDs is Aerospike. Aerospike is a low-latency, high-transaction-volume caching database. I'll tell you, we get to work with most database platforms out there: customers love Aerospike, and Aerospike loves SSDs. It knows how to push an SSD and get incredible performance out of it.

As you can see, Srini Srinivasan, the CTO and founder of Aerospike, provided this quote, saying they had actually seen up to a 70% increase in performance when they started to use AWS Nitro SSDs for the Aerospike workload. Again, that is a workload that really taxes us to deliver that performance. It's an incredible database.

So we were very happy to see the performance that Aerospike got from our Nitro SSDs. 

Finally, I want to touch a little bit on security. It's such an important area, and it's something that we thought deeply about when we designed Nitro. When we took this step and said, let's redesign the way virtualization works, one of the first things we thought about was: if we have to redesign security, how would we think about that from a virtualization point of view?

As I said earlier, we have the Nitro security chip. But we did a couple of things more broadly across Nitro when we designed it. The first thing is everything is encrypted. You can imagine from a control plane perspective with different parts of the system, all of that uses signed and encrypted communications, signed keys that are only active for a period of time. 

The Nitro security chip also gives us secure boot. So any instance that launches we're able to check the system and make sure that from a supply chain point of view, from a firmware point of view, from a software point of view from every single part of configuration of that system, it's exactly as it should be and nothing has been tampered with.

The other thing is patching. I mentioned this earlier with SSDs, but this is actually true across the whole of EC2. I've got to be able to go and make changes behind the scenes, whether it's the kernel, the firmware, or the hypervisor. And I've got to do that at scale without impacting your workload.

Some customers have workloads that are incredibly sensitive to latencies. Nitro has actually allowed us to avoid having to do things like live migration just to do an update: I can go in and do that update in place on the Nitro cards, update the firmware, update the BMC, whatever it might be.

From a security point of view, the most important thing is that we designed Nitro with no remote access. We actually challenged ourselves. We said we never, ever want to be able to log into the system. You know, back in the day with the old system, you could SSH into Dom0 in Xen, or whatever it might be. With Nitro, we said we don't want that channel, and we're never going to build it, because it means that an AWS employee, anyone on the team, can never access a physical machine with customer data on it. It's just not possible. We never built that.

That's a significant improvement and it's really helped us a lot as the world has moved towards what's termed confidential computing. Confidential computing is a tough term to understand because it's one of those labels, a little bit like cloud was back in the day that gets put on so many different things. 

But really from a confidential computing point of view, now, we see this is how do we make sure that AWS protects customer code and data from any unauthorized access no matter where it might be, whether it's encryption in transit, whether it's on disk or whether it's in the CPU actually doing something or being moved around in memory.

There are two dimensions that we think about when it comes to confidential computing. The first one is protecting you from the cloud provider. You're trusting us to run your workloads on AWS; how do we make sure that we never have access to that data? I've described in a lot of detail how Nitro helps us do that, and we've built it in such a way as to guarantee that we never have access to your data.

The second one is: how do you protect yourselves from other folks within your own organization? The classic example of this is your private SSL key. I'm sure many of you have battled with the problem of where to put the private SSL key. You might decide you want to put it on disk, but then how do I set up permissions? Then maybe you put it in a key store, like a Java key store; well, that needs a password, and who has the password? And you go through this never-ending problem where eventually you need two people with separate keys who are going to meet at some undisclosed location with a safe, right? You can never solve that problem of how you actually make sure it's secure.

Well, we launched a feature two years ago called AWS Nitro Enclaves, which actually uses the Nitro security architecture to solve this problem. What it does is let you enable an enclave within your instance that takes a dedicated core. You can take more if you want, you can tell us how many cores you want, but they're dedicated, not shared, together with some dedicated memory and some dedicated storage.

You can essentially run an isolated container environment within your instance. The interface from the instance to that environment is actually a vsock API. It's very limited in what it can do, but you can set up your container in a way that you can effectively ask it questions. You can store private information in there, you can store your SSL private key in there, and you can ask it to generate a session key through the vsock interface.
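As a rough illustration of that interface, here's what the parent-instance side of a vsock conversation can look like in Python (Linux only). The CID, port, and request format are all assumptions for the sketch; the enclave application defines its own protocol:

```python
import socket

ENCLAVE_CID = 16      # assumed CID, as reported by `nitro-cli describe-enclaves`
ENCLAVE_PORT = 5005   # assumed port the enclave application listens on

with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s:
    s.connect((ENCLAVE_CID, ENCLAVE_PORT))
    s.sendall(b"generate-session-key")  # hypothetical request the enclave understands
    session_key = s.recv(4096)          # the private key itself never leaves the enclave
```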

You actually solve that problem of having something that's absolutely secure. There are some really great use cases around this. One of the first things we shipped, with our Certificate Manager service, is the ability to handle SSL keys this way. But many customers over the last two years have worked out really interesting use cases for this.

We actually just shipped two new features for this. One is Nitro Enclaves for Graviton. We've seen a lot of adoption of Graviton, and so we wanted to make sure that Enclaves works in a Graviton environment.

We also importantly shipped it just today actually for Kubernetes. Many of you running containers are using Kubernetes. You wanna make sure you can use these Enclaves within your environment and they're very straightforward to do and integrate really well with the Kubernetes ecosystem.

So we're very excited to see that out there. 

One customer that's recently used Enclaves in a really interesting way is The Trade Desk. There's been a lot of focus on web privacy: back in the day, everybody was using cookies to track things, and that's changed significantly in the last two years. The Trade Desk needed to handle this for their application, and they were using the Unified ID 2.0 solution to track customer information so that they can provide customers with a better experience. That's obviously PII, information they need to treat in a very secure way, with no risk of it ever leaking anywhere. And so with Nitro Enclaves, they were able to build a solution that helped them improve and ensure customer privacy while advertising with valuable first-party information, protected and secured, for more data-driven media buying. It actually solved the problem of how to guarantee that PII is absolutely secure and never compromised, not just from the cloud provider, but within the organization as well. It's a really interesting feature, and it really takes security to the next level.

We spend a lot of time thinking about performance, but one of the things we also spend an enormous amount of time on, outside of features, is making sure that we have operational excellence. We always encourage customers to run in multiple Availability Zones, to use load balancing, to use auto scaling, to do everything you can to guarantee that you'll never be affected by the failure of a single instance. But that doesn't mean in any way that we don't spend time making sure that single instances never fail. And so I always like to say, we don't want to have our head in the clouds and think that customers are going to make their systems perfectly resilient to everything on EC2. We've got to continue to push and make sure that EC2 instances are as highly available as they can absolutely be. And so today, I'm very excited to invite Jeremy Connolly, a principal technical program manager within my team who spends an enormous amount of time focused on operational excellence with customers, to the stage to talk more about this topic.

Thanks, Dave, and good afternoon, everyone. I'm super excited to be here today. Customers often have a complex set of needs that they focus on as they leverage the combination of AWS services. One key thing we focus on doing well at AWS in order to meet those needs is quality. In line with this, to minimize operational overhead for customers, launch times, availability, and reliability are daily touch points at AWS. And in the compute space, as Dave always says, small things can make a big difference. With compute, we often think about the data plane, or the instances themselves; in a cloud environment, making sure that those APIs work for you and operate within a specific latency is important. We're always thinking about how we can improve launch times. In an ephemeral world, customers are often launching instances in a just-in-time mode of operation, so as a result, AWS pays keen attention to how we can improve launch times over time. But you don't have to take my word for it. As you can see here, some third-party benchmark teams have found just how consistent our APIs can be on launch times; they randomly scaled up thousands of instances with an open-source test harness over the course of a few weeks. As you can see, represented by the orange dots here, that's AWS alongside one of our competitors. The results, need I say, showed a big difference over just a few years.

The EC2 control plane has driven a median improvement of 44% on all Nitro-based instance launch times, from instance pending to running state, through continuous improvements such as adding additional caching layers in the last half of 2021 and adding workflows that operate in parallel to set up volumes and the underlying EC2 host in the first half of 2022. These efforts continue to prove our focus on exceeding customer expectations, but we haven't stopped there. EC2 control and data plane performance for Linux has seen a median improvement of 29% for Nitro-based instance launches, from instance pending to SSH-ready state. Likewise, Amazon Linux 2 has seen a 27% improvement, while also showing better median launch time performance than any other Linux operating system. Continuous improvements such as a smaller RAM disk and early boot process optimizations to the interface between the instances and the EBS storage volumes have allowed us to make a big difference. Equally, Windows launch times from a pending to RDP-ready state have seen a median improvement of 65% on Nitro-based instances, and with the Fast Launch option we released in early 2022, a customer can achieve up to a 73% improvement. But as you might imagine, we simply couldn't stop there.

To further improve, we've taken a quality review program that's been in existence since 2011 and adopted metrics to further deep dive the customer experience, such as the industry-known term AFR, or annualized failure rate. While the general industry defines AFR as the rate at which you replace a failed hardware component, at AWS we hold a higher bar: we consider a failure to be any time a customer instance behaves in a way they don't expect. Some examples would be spontaneous reboots, kernel panics, network card link flaps, et cetera. AWS monitors the AFR of these instances very closely, meeting each week with senior leaders such as Dave to make sure that we're achieving the desired goals and quickly addressing any regressions. Focusing tirelessly on these details has led us to a 62% AFR improvement over just the last two years. We accomplish this by understanding, first, the root cause of each issue, and where the root cause isn't known, working directly with our manufacturers to further isolate it and assist in a path forward. Because of the scale AWS operates at, we can often identify small moves in quality trends that are not easily identifiable in smaller fleets. To further ensure customer instance availability exceeds expectations, we're working relentlessly on maintaining fleet health through transparent maintenance technologies, such as live migrating the customer's instance to preserve the customer's workload uptime as we take care of power- and network-related maintenance under the hood. We now live migrate well over 1 million instances per week on behalf of customers.

So, as you can see, at AWS we're passionate about providing a reliable experience to our customers. It's been great talking to you today, and I hope you can see just some of the small things that we're doing to try to make a big difference. Thanks for your time. Back over to you, Dave.

Well, thank you, Jeremy. That was great. I hope you can see that in some of the workloads that you're running. And I'll tell you, from an AFR point of view, we're actually significantly below what we thought was possible. In reality, nobody has paid this much attention to these components, whether it's disks or memory or CPUs, at the sort of scale AWS operates at.

And when you're paying attention at scale, you can actually get AFRs to a level that you never thought you would get to. And so we're very excited with where we landed. Let's take a look at supporting some workloads and some new instance launches as well. 

Again, we want to go back in time and take a look at that very first EC2 instance. This was the very first EC2 instance; it actually didn't even have a name when we started. About a year and a half later, we called it the m1.small. You can still launch it today: if any of you would like to use an m1.small, it's still available, 16 years later, on AWS.

How we came to decide on these numbers was actually not that scientific. We literally just divided a server we had at the time into four, and that became sort of the standard instance for cloud computing. Everybody launched one of these, and all the other instances have grown from there.

You know, today we have over 600 instance types on EC2. We just launched about another 49 or so last night, with some of the new instance launches that have come out. And the important thing, the way to think about this, is that 600 is a big number, and maybe people say, well, which one do I use? Well, there are some very clear swim lanes, right? Maybe you need a compute-optimized instance. If you're looking for something that's more memory optimized, maybe you need something with an enormous amount of memory; we have up to 24 terabytes of memory for some of our SAP workloads. Maybe you need a storage instance with a lot of storage IO. So depending on how your application works, you can very quickly go and find the instance family that's right for you.
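One way to narrow those 600-plus types down programmatically is the DescribeInstanceTypes API. Here's a small boto3 sketch that shortlists the sizes with at least 512 GiB of memory; the threshold is an arbitrary example:

```python
import boto3

ec2 = boto3.client("ec2")

# Page through every instance type and keep the ones with lots of memory.
paginator = ec2.get_paginator("describe_instance_types")
shortlist = []
for page in paginator.paginate():
    for itype in page["InstanceTypes"]:
        mem_gib = itype["MemoryInfo"]["SizeInMiB"] / 1024
        if mem_gib >= 512:
            shortlist.append((itype["InstanceType"], mem_gib))

# Print the shortlist, largest first.
for name, mem in sorted(shortlist, key=lambda t: -t[1]):
    print(f"{name}: {mem:.0f} GiB")
```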

We have a number of instance capabilities. We actually now have four processor families on AWS: the Graviton processor, our Intel processors, AMD, and, announced two years ago at re:Invent, Apple silicon as well; we've just launched the M1 Apple silicon. We have a significant number of other capabilities, and then also options like EBS volumes that you can attach to your instance.

So let's take a look at some of our processor types and what we have available in each category, starting with Intel. We've been working very closely with Intel for the last 16 years; theirs was one of the first, or the first, processor that we brought to the cloud, and it's been a really good collaboration. We work very closely with the Intel team to make sure that we have processors that are suited for the cloud and suited for AWS, and obviously have the performance and availability that you would need.

Today, we have over 350 Amazon EC2 instance types powered by Intel processors, including some of the fastest, with the highest all-core frequency, running on AWS. Those include some of the more recent instances that we launched, like the network-optimized Intel instances we just covered, and our storage instances as well, which use those Nitro SSDs we spoke about.

One of the really interesting use cases is AI Scout. Actually, some of you know me and what I do with a lot of my weekends: I have two daughters, and they are soccer players. They spend an enormous amount of time on the soccer field, and I actually live stream those games myself. I've taken that to the next level, live streaming the games so it looks like Fox for kids' sports. But this is a really interesting use case with AI Scout. What AI Scout is doing is democratizing the process of finding the next great soccer player. It's really challenging.

"There are so many scouts and coaches out there and they can cover so much of the population sometimes they're incredible players that just never ever get noticed. And so a i scout has partnered together with intel to actually build a new platform using the, the latest intel cp us together with their habana accelerator as well, which we have available in our dl one instance and what they actually do is just using a cell phone and with no other tracking on the player, they're able to actually track certain drills and it's allowed them to actually work with an example as chelsea football club to actually identify a few new players that they would never have been able to find before. So just incredible what they've been able to do with some of the intel technology in the x 86 space. 

Our next available processor is AMD, and we have an excellent relationship with AMD as well. We actually launched the first AMD processor from the new EPYC range on AWS back in 2018, and we've gone through the full generations of AMD processors; we're currently on the third generation with the Milan processor. They offer 10% lower cost versus the alternative option in x86. And so AMD is really doing well from a price performance point of view: when you have an x86 workload and you're looking to improve price performance, specifically with the latest sixth generation, our M6a, C6a, and R6a are a really great option for easily moving your workload. It's an immediate migration for most applications, and you can take advantage of that price performance.

A great example here is Sprinklr. Sprinklr recently moved a large part of their fleet over to AMD. They moved to the M5a, the R5a, and the C5a, and they were able to see significant cost savings, in some cases up to 50% for some of their workloads, when moving over to AMD.

Then finally, we have the AWS Graviton processor range. This is a set of processors whose very first version we announced back in 2018. We've iterated on that, and the one now available on the vast majority of our Graviton instances is Graviton2.

Graviton2 continues to offer a 40% price performance benefit over comparable x86 alternatives. We've also improved the ecosystem, and I'll talk on the next slide about just how much we've done to make it easy for you to move to Graviton.

We also have enhanced security with always-on memory encryption. Every version of the Graviton processor has had memory encryption, so all of your memory is encrypted; it's never in the clear when it goes out to memory.

As I say, we have a vast range of Graviton instances that we've launched over the last number of years, and now pretty much every single instance type has a Graviton option, all of them with significant cost savings opportunities.

You know, the very first Graviton processor we launched was actually one of the processors we were using on a Nitro card. It wasn't the server chip we were working on building; it was a Nitro card CPU. And we put it out there in 2018 to really spark the ecosystem.

What we were trying to do was to say: Graviton is going to be available, so let's make sure operating systems, ISVs, and third-party software are ready. And it was incredible to see the response through 2019. Today there are very few applications or operating systems out there that aren't available if you're looking to migrate to Graviton. It's actually gotten so easy that the marketing team earlier this year wanted to make a point, and they decided we should do this thing where moving to Graviton is as easy as a walk in the park.

The only challenge we had with this is they gave me a bulldog, and I don't know if any of you have tried to get a bulldog excited, but it was quite a challenge. What we have seen is that moving is really simple and fast. We actually had an enterprise customer that moved in four days; they went from nothing to running in production in four days. When I heard that, I told them that's probably a little on the fast side, but they were very happy with what they'd been able to do.

And so it's just incredible to see the progress that customers are making. And again, it's that ecosystem, it's having that software support. And in many, many cases, it's actually much simpler than customers expect it to be. I always tell customers take an engineer or two, give them two weeks and tell them to move the application and let's see what happens, go and do the learning. Many customers get stuck thinking it's gonna be hard and don't actually move. But this is really, really simple. 
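For a first experiment like that, the mechanics can be as small as resolving an arm64 AMI and launching a Graviton instance; rebuilding any native dependencies for arm64 is usually the bulk of the remaining work. A sketch using the public SSM parameter for the Amazon Linux 2 arm64 AMI:

```python
import boto3

ssm = boto3.client("ssm")
ec2 = boto3.client("ec2")

# Resolve the latest arm64 Amazon Linux 2 AMI from the public SSM parameter.
param = ssm.get_parameter(
    Name="/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-arm64-gp2"
)
arm64_ami = param["Parameter"]["Value"]

# Launch a Graviton2 general-purpose instance with that AMI.
ec2.run_instances(
    ImageId=arm64_ami,
    InstanceType="m6g.large",
    MinCount=1,
    MaxCount=1,
)
```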

One customer that has moved and seen really big results is DIRECTV. We're all very familiar with DIRECTV; this is their streaming application, and they've moved a lot of their functionality over to Graviton2-based instances. What they saw was up to 25% lower operating costs. So from a price performance point of view, moving over to Graviton2 significantly lowered their costs and improved performance. That quote is from Jonathan Johnson, VP of Software Engineering at DIRECTV Stream, and it's just great to see that use case and what Graviton meant for them.

Last year at re:Invent, we announced Graviton3, the next generation of our Graviton processor, and today we have two instance types available with Graviton3. We had the C7g, and then just last night Peter DeSantis launched the C7gn, our network-optimized version. Graviton3 is actually 25% faster than Graviton2, so not only do you get that 40% boost over something like Cascade Lake, now you're getting an additional 25%.

And I'll tell you, in many use cases customers see significantly more than 25%, specifically where you're doing a lot of single-threaded operations like encryption: SSL, IPsec, that sort of thing. Graviton2 didn't perform as well there, and in those use cases Graviton3 can sometimes give you as much as an 80% performance improvement over what you would have seen on Graviton2.

It's also the first instance in the cloud to use DDR5 memory. And so we've been running DDR5 memory now for most of the year with Graviton3. You'll start to see that on other instance types, but, you know, it's really exciting to be doing that, giving you better performance and much more memory bandwidth. And then, also super important, it's more efficient from a power point of view as well.

One of my favorite sports is Formula 1. I've been watching it for most of my life, and working really closely with the Ferrari team, and with Formula 1 as AWS does, has been really exciting; the performance they get out of it has been amazing.

And you can see this quote from Pat Simmons: they moved over to Graviton3, our C7g instances, and actually found in their use case that Graviton3 was 40% faster than Graviton2, which is an enormous performance boost and saving. You know, running green and making sure that you have a small carbon footprint is so important nowadays. We're all looking to lower costs while also trying to improve our sustainability story, and Graviton3 is a significant step in the right direction, and something relatively simple to adopt for most workloads.

And so with Graviton3, you're using up to 60% less energy than you would for the same workload on a comparable x86-based instance. Just amazing. We also have a new carbon footprint calculator that you can use on AWS; it'll look at your account, tell you what your carbon footprint looks like on AWS, and show you what opportunities there are to improve it, Graviton3 being a big one. As I said, Peter DeSantis announced a brand new instance for us: the C7gn.

And this is actually using a new version of the Graviton3 processor. Graviton3 gave us a big step forward with a 25% performance improvement, but we've also launched Graviton3E, a new version of the processor that gives you another boost in performance, especially for vector workloads; we'll talk about that a little more with the next use case.

From a networking point of view, this uses our brand new Nitro card, the Nitro v5, which gives you 200 gigabits of networking but also twice the packets-per-second throughput. So if you're looking for that PPS performance improvement for low-latency workloads with very small packets, this is the instance that will help you do it. And then obviously the Graviton3E processor is giving you that boost in compute performance, with up to 2x faster performance in cryptography as well.

You know, designing our own chips has been very interesting for a use case called electronic design automation, or EDA. When we design Graviton, we actually simulate the Graviton CPU on AWS; we use Graviton instances for that, and we spend months and years designing that chip, running simulations, making sure it's going to perform exactly as we need it to, and shaking out any issues and errors. It's a really large workload. We do that for Graviton, but what's also happened is that a lot of the chip design companies, anybody doing EDA workloads, are starting to run on AWS, and we've seen some significant progress there.

One of them, which recently announced going all in on AWS for EDA workloads, is Marvell. And you can see Raghib Hussain from Marvell said that by being able to move to and utilize AWS for EDA, Marvell will be able to optimize their chip development and accelerate their time to market.

From a chip point of view, you want to have that capacity when you need it: run those simulations, run them at extreme scale, make sure you get all the kinks out of the system, and then you actually go and manufacture the CPU.

One of the interesting things about EDA workloads is that they need very fast CPUs. Very often customers are paying per-core licensing for these EDA workloads and the software they use, so they want to run at the highest possible frequency per core.

And so we have a number of instances today, but we're also very happy to announce a brand new instance, and it's actually the very first instance to run Sapphire Rapids, the latest Intel processor, on AWS. Available today in preview is the Amazon EC2 R7iz instance running Intel Sapphire Rapids at 3.9 gigahertz. So it's a high-frequency CPU, the very first Sapphire Rapids SKU if you want to try it out, and it gives you 15% better compute performance versus the previous generation of high-frequency Intel instances.

It also brings 40% workload performance improvements for databases, video processing, and load balancing. One of the interesting things is that even though it was designed for EDA workloads, customers tend to like these high-frequency chips for anything that requires a per-core license, and that includes things like Oracle databases. If you can get more out of each core, well, it's going to make your licensing a lot cheaper as well. So being able to scale up your workloads this way is really exciting.

So we're excited to have the R7iz available in preview today. You know, supporting some workloads is more of a journey. It takes many years, and one of those workloads that's taken us a while, and that we've really had to work on with many folks, is high performance computing.

Adam spoke about this a little in his keynote earlier today. But go back five years, and if you spoke to anybody in the HPC community, they would have said there's absolutely no way you could run on AWS, you just couldn't use the cloud. Those workloads are just too intense; there's no way we could make it work.

And over the years, we've started to solve the problems. Technically, one of the things we did was launch the Elastic Fabric Adapter, which uses a new networking protocol we developed called SRD (Scalable Reliable Datagram). That allowed us to reduce latencies on the network to very low microseconds, sort of 10 to 15 microseconds of latency between instances.
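From the launch side, using EFA is mostly a matter of requesting the right interface type; the MPI or NCCL stack on the instance then rides SRD underneath. A hedged boto3 sketch, with placeholder IDs and c5n.18xlarge standing in for any EFA-capable instance type:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder HPC AMI
    InstanceType="c5n.18xlarge",       # one of the EFA-capable instance types
    MinCount=2,
    MaxCount=2,
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "InterfaceType": "efa",        # request an EFA instead of a plain ENA
        "SubnetId": "subnet-0123456789abcdef0",
        "Groups": ["sg-0123456789abcdef0"],
    }],
    # A cluster placement group keeps instances close for the lowest latency.
    Placement={"GroupName": "my-cluster-pg"},
)
```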

We also focused on improving our performance on different instances. And you can see the types of workloads that customers are running here: computational fluid dynamics, which is the work we did with Formula 1 to improve the airflow around the car and which has really changed the racing in the 2022 season; life sciences in the medical space; genomics; risk analysis; energy; and then obviously research as well. So it's just a broad array of use cases for high performance computing.

Earlier this year, we launched the HPC6a instance, powered by the AMD Milan CPU and providing 100 gigabits of networking over EFA. The great thing about this instance is that the Milan CPU's incredible performance has allowed us to target HPC workloads, but we've also been able to reduce the cost. That has been the final piece of making HPC successful in the cloud: could we do it cheaper than folks could on premises?

And I'll tell you, anybody in the HPC game really understands their hardware. They know everything that goes into getting a server set up just the way they need it and getting the cost down. So for us to be able to win that workload has been an incredible challenge. But with HPC6a, we're winning a lot of those workloads now, as customers start to move to AWS for HPC.

As Adam laid out, there are different types of HPC use cases, and with HPC6a we're winning a lot of the compute-intensive workloads, things like computational fluid dynamics. At Supercomputing, which happened in November, we also received the award for best HPC platform; that's the fifth year we've received that award, and it's voted on by the readers of HPCwire magazine. So we're very proud of that achievement.

Adam announced this earlier in his keynote as well: the Amazon EC2 HPC6id instance. While the AMD-based HPC6a is focused on the more compute-intensive workloads, this one is focused on HPC workloads that are more memory and data intensive. That would be things like seismic workloads, or analysis of concrete buildings and those sorts of designs. This is a new instance using the Intel Ice Lake processor; it gives you 200 gigabits of EFA networking, and importantly, it gives you local disks and a little more memory, which is what's needed for some of these data-intensive workloads. You get one terabyte of instance memory, which we think should be enough, and 15.2 terabytes of NVMe storage on the instance as well.

And again, it comes at a price point that is 2.2 times better price performance for data-intensive HPC workloads, things like finite element analysis, than comparable x86 instances. So it's really solving both the technical challenge and the price point for HPC.

And then we're also launching HPC7g instances. This is the first time we've had Graviton in the HPC space, and again it wins both technically and on price: 200 gigabits of networking and the best price performance for HPC workloads. It uses the Graviton3E processor, that brand new processor, which offers up to 35% higher vector instruction performance, something incredibly important in the HPC space. So we're pretty excited about the array of instances, from our AMD HPC6a, to our Intel HPC6id, to our HPC7g Graviton instance. We really believe that if you're in the HPC space, there's an instance type that will work for your workload.

I want to talk a little bit now about machine learning and what's happening in that space. We've obviously been focusing on machine learning, and more and more companies are using it. We have a broad array of machine learning instances: everything from NVIDIA GPUs, both for training and inference, with the latest A100 GPUs available; the DL1 instance with the Intel Habana accelerator; and then some of our own custom silicon in this space as well.

We've actually recently upgraded our P4d instances to the P4de, for extended or enhanced, which provides you with the latest A100 GPU with 80 gigabytes of memory as well. This also has 400 gigabits of networking, and customers deploy these in very large clusters for machine learning training, in many cases using several thousand instances and well over 4,000 GPUs in each of those clusters.

You know, if there's one workload where we really focus on price performance, it's the machine learning space. What we see from customers is that very often the thing limiting them from doing more machine learning is the cost of machine learning. They often say that if they could improve the performance or get the cost down, they would just do more machine learning: it would add more value, they would have more insights for their customers, they would improve their applications.

And so, given the growth of machine learning, this has been a big focus area for us. We announced Trainium last year at re:Invent, and about two months ago we actually launched the Trn1 instance. The way to think about this is: if you're running machine learning training on a GPU, and you're using something like TensorFlow or PyTorch or MXNet as a framework, you can use that same framework, slot in a Trn1 instance, and begin to run on Trainium, similar to moving from x86 to Graviton.

Trainium actually offers up to 50% better price performance for things like BERT and ResNet, and even GPT-3 and some of the bigger models, over what you'd get from a GPU. So it's a significant cost-saving opportunity to migrate over: use the same frameworks as a layer of abstraction, but use the Trn1 instance.
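As a sketch of what "slot in a Trn1 instance" means in practice, here's a toy PyTorch training step on a Trainium device. It assumes the AWS Neuron SDK's torch-neuronx package, which exposes NeuronCores through PyTorch/XLA; the model is a stand-in, not anything Epic or AWS runs:

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm  # provided via the torch-neuronx / PyTorch-XLA stack

device = xm.xla_device()               # a NeuronCore on a Trn1 instance
model = nn.Linear(768, 2).to(device)   # toy stand-in for a real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

inputs = torch.randn(8, 768).to(device)
labels = torch.randint(0, 2, (8,)).to(device)

# One ordinary training step; the framework stays the same as on a GPU.
loss = nn.functional.cross_entropy(model(inputs), labels)
loss.backward()
optimizer.step()
xm.mark_step()  # materialize the lazily traced graph on the device
```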

As I said, that's a 50% cost-to-train savings over comparable GPU instances. We recently launched that, and one of the customers working with us there is Hexxon. They've moved over to Trainium and actually saw significant cost savings, which is allowing them to train even more models. We're excited to be working with them.

A couple of years ago, on the inference side, we launched our Inf1 instance, powered by Inferentia. That did a similar thing: it offered up to 70% lower cost than GPU instances. When we launched the instance, it was actually about 45% lower cost, but over time we optimized the software and the silicon and improved the performance; you'll see a similar thing happen with Trainium over time as well.

For these inference workloads, we have the same framework support, and it's also integrated with the rest of AWS. We've been working very closely with Money Forward. They've been building an AI chatbot service on AWS, and using our Inferentia instances they reduced their inference latency by 97% over comparable GPU instances. So it's not just cost; in all these cases they're saving significant cost, but it's also reducing latency significantly.

And as Adam announced earlier, we're happy to announce the availability of our Inf2 instance, powered by AWS Inferentia2. Inferentia2 is the next-generation silicon for inference, and each instance comes with 12 Inferentia2 accelerators and up to 384 gigabytes of high-bandwidth memory. The really crazy statistic: if you think about these models having hundreds of billions of parameters, with Inferentia2 you can run inference on a model with 175 billion parameters on a single instance, if you use all 12 Inferentia2 accelerators. That's just pretty amazing.

At that scale, it also provides 45% better performance per watt than comparable GPU instances.
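On the inference side, the Neuron workflow is to compile the model ahead of time and then call it like any traced PyTorch module. A minimal sketch, assuming the torch-neuronx package on an Inf2 instance, with a toy model:

```python
import torch
import torch_neuronx  # AWS Neuron SDK's PyTorch interface (assumed installed)

model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU())
model.eval()
example = torch.randn(1, 128)

# Compile for the Inferentia NeuronCores, then run like a normal module.
neuron_model = torch_neuronx.trace(model, example)
output = neuron_model(example)
```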

And finally, talking a little bit about developers: we want to make sure that we support developer workloads on AWS as well. Many times we actually went to customers and they said, we went all in on AWS, but we still have some developer workloads that we weren't able to move; you just didn't support them in the way we needed. One of those was supporting Apple.

In many cases, you have an organization that has to build Apple-based applications for iPads or iPhones or Apple TV. And somewhere in your organization there's a pile of Mac minis under somebody's desk that you have to time-slice between engineers, or maybe it's in a data center. But beyond time-slicing and giving people turns, there hasn't been a cloud solution.

And two years ago, we launched EC2 Mac instances, and we've just recently launched the EC2 M1 Mac instance, which provides the latest Apple M1 chip as the processor on the Mac mini. It delivers 4x better build performance, with 60% better price performance than we had previously on the x86 Mac instances.

Customers that have moved to this have found that both cost and time to deliver improve, and bugs and that sort of thing get resolved so much faster, because developers actually have access to these instances when they need them. They're no longer limited to building on their own laptop; they're able to run more simulations and integrate more with the Apple ecosystem. And because these instances are available on demand, you can just shut them down when you don't need them. So we're very excited about what we've been able to do with Apple and the partnership we've had there.

Let me talk a little bit about cost optimization. It's really a topic of focus right now; I think many of us are thinking about how to reduce costs, and we're all looking for ways to lower them. I wanted to give you some insight into what we do at AWS. The first thing I would say is that we have a leadership principle called frugality: we want to be frugal in what we do, and the way I describe it to my team is that we want to spend money as if it were our own money, not the company's money.

The other thing that's super important is that frugality is a key part of any design we come up with. If you as an engineer design something and it's too expensive, well, you actually haven't solved the problem. You can't think about cost later in the design cycle; you've got to think about it right at the start. So that's the first thing: we made it cultural.

Secondly, there are tools and metrics that we have internally, and many of them are available to you; we'll go through them shortly. And finally, we hold our teams accountable. With those metrics we're able to ask our teams: hey, are we reducing costs? What are your goals for cost reduction?

Some of the things available to you: you can obviously look at diversifying your EC2 instance types, making sure you use things like Graviton or even AMD to reduce cost there. We have a number of different purchasing models, things like Savings Plans and Spot, that allow you to reduce prices. And then there's matching capacity to demand with Auto Scaling, which is critically important: being able to give capacity back when you don't need it. So there are a lot of really good tools for you to use on the purchasing options.

As I said, you have On-Demand, Savings Plans, and Spot Instances. Savings Plans have been massively adopted by customers: you make a one-year or three-year commitment and pay a significantly reduced price, up to 60% less in some cases. The way to think about it for your workload: anything that's sustained usage, you want on Savings Plans; then you've got On-Demand; and for your more optional, interruptible workloads, that's where you want to run Spot Instances. With Spot you can save up to 70 to 80% on the instance price just by making sure you move some of those workloads over.
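As a small illustration of how little changes when moving an interruption-tolerant workload to Spot, here's a hedged boto3 sketch; the AMI and instance type are placeholders.

```python
# Minimal sketch: the same run_instances call, but requesting Spot capacity
# via InstanceMarketOptions instead of defaulting to On-Demand.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c7g.xlarge",         # placeholder instance type
    MinCount=1, MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            # Persistent request + stop-on-interruption, so the workload can
            # resume when Spot capacity returns instead of being terminated.
            "SpotInstanceType": "persistent",
            "InstanceInterruptionBehavior": "stop",
        },
    },
)
```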

Savings Plans have been a big improvement as well. It's really simple: you just make a dollar commitment to AWS and you get significant cost savings, up to 72%. And the other thing is, if you commit for a year and decide to change your instance types, you can just go and change them; you're no longer committing to a specific instance type or, in some cases, specific Regions. We've actually saved customers $15 billion since Savings Plans launched. It's just an incredible statistic.

Another one is Compute Optimizer, which actually looks at your workload. It's enabled for every single customer; we're collecting data on this, and you can go in and tell Compute Optimizer to give you recommendations. It'll look at your infrastructure and tell you: we recommend moving this instance from an M to a C, or moving this from an R to an M, or this application is a good opportunity for you to save money on Graviton.

And we actually use this internally: I run this across all of AWS, we look at the data, and we ask teams to go and implement those recommendations. We also use machine learning, so the recommendations you're getting keep improving over time. With Compute Optimizer we've provided 10 billion recommendations to customers on ways they can save money. So it really is something you've got to mechanize within your organization, and we have the right tools and services to let you do it.
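If you'd rather pull those recommendations programmatically than through the console, a minimal boto3 sketch might look like this; the field handling is simplified.

```python
# Minimal sketch: fetch Compute Optimizer's EC2 rightsizing recommendations,
# the same findings the console shows once the service is opted in.
import boto3

co = boto3.client("compute-optimizer")

resp = co.get_ec2_instance_recommendations()
for rec in resp["instanceRecommendations"]:
    current = rec["currentInstanceType"]
    finding = rec["finding"]  # e.g. OVER_PROVISIONED or UNDER_PROVISIONED
    options = [o["instanceType"] for o in rec["recommendationOptions"]]
    print(f"{rec['instanceArn']}: {current} is {finding}; consider {options}")
```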

I think if you're just starting, you'll find there's a lot of low-hanging fruit, a lot of really good options to reduce costs. If you haven't thought about it for a while, it gets a little harder over time. But as long as you have the right mechanisms, the right focus, and the right culture, you can make some really big progress on cost reduction within your team.

Finally, I want to talk about bringing compute closer to you, ensuring you have compute where you need it. AWS has such a broad network: today we have 30 Regions, and we actually just launched our 30th Region last week in Hyderabad. We have five more Regions in progress right now that we've announced, and we've also got a concept called Local Zones.

A Local Zone is not a full Region; it's more like an Availability Zone that we bring closer to you, so from a latency point of view it can be close to where you are geographically. We currently have 16 Local Zones across the US, and we've now launched Local Zones globally: with four launched just in the last week, there are 21 available across the world, and another 30 Local Zones coming in locations around the world over the coming months.
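As an aside on how you'd actually start using one: Local Zones are opt-in, so a minimal boto3 sketch looks like the following; the Los Angeles zone group shown is just an example.

```python
# Minimal sketch: opt in to a Local Zone's zone group, then list the
# Local Zones visible from the parent Region.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

ec2.modify_availability_zone_group(
    GroupName="us-west-2-lax-1",      # example: the Los Angeles Local Zone group
    OptInStatus="opted-in",
)

zones = ec2.describe_availability_zones(
    Filters=[{"Name": "zone-type", "Values": ["local-zone"]}],
    AllAvailabilityZones=True,
)
for z in zones["AvailabilityZones"]:
    print(z["ZoneName"], z["OptInStatus"])
```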

So we're really bringing AWS closer to you, and all of that is connected to the AWS Global Backbone. This is a network that can carry all that traffic without using the public internet: when you call an AWS service, you get onto that backbone as close to where you are as possible, and the traffic is guaranteed to be encrypted and highly managed all the way back to AWS.

Another offering we launched a couple of years ago is called AWS Outposts. This is really about those workloads you might have in a data center that you don't believe you could move into one of our cloud facilities, maybe because of a latency problem: it's just too far away and you need very, very low latency. Or there could be some other reason, maybe access to a lot of large data, or being very close to some other system that you have.

So this is really the AWS hardware we deploy in our own data centers, made deployable in your facility, in your data center. It's fully managed: if a server fails, we're monitoring it, you'll receive a new server, and you can pull out the old one and put in the new one. And it behaves in exactly the same way as all of our other services, so you would use the EC2 console to launch instances and manage that Outpost with the same APIs.
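To illustrate the "same APIs" point, here's a minimal boto3 sketch: launching onto an Outpost is the same run_instances call, just targeting a subnet that lives on the Outpost. The IDs below are placeholders.

```python
# Minimal sketch: launch an EC2 instance onto an Outpost by pointing the
# standard run_instances call at a subnet created on that Outpost.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI
    InstanceType="c5.xlarge",             # a type your Outpost is provisioned with
    MinCount=1, MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # placeholder: a subnet on the Outpost
)
```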

It's been really, really popular. We've had a number of customers make use of Outposts, with some really interesting use cases. We've had a lot on the latency side, and we've also had a number driven by data sovereignty, where data needs to stay in a specific state or country. So we've been very excited.

One of the customers we've been working with very closely on Outposts is Nasdaq, and it's been an incredible journey over the last couple of years. Nasdaq have really leaned into the cloud; they've been a great customer of ours since 2008. They came to us and said, we'd love to be able to run the markets on AWS. It was a little terrifying to start with, thinking about something like that actually running on AWS, but it's been just an amazing journey.

You heard us announce Nasdaq's intention, together with Adena, the Nasdaq CEO, and Adam on stage last year at re:Invent. To talk more about what we've been doing with Nasdaq, I'm very excited to welcome to the stage Nikolai Larbalestier, SVP of Cloud Strategy and Enterprise Architecture. Nikolai!

Well, Nikolai, it's been an incredible year of delivery, and I want to thank you for joining me on stage today. Tell us a little bit about what you do at Nasdaq.

Well, thanks for having me, Dave. I joined Nasdaq in 2013, and I'm currently responsible for enterprise architecture, performance engineering, and capacity planning, with a focus on strategic initiatives, including enabling our cloud adoption through strategy, architecture, and a common cloud platform for Nasdaq.

Well, as I said, it's been quite a year. But before we get to what we've been doing together in the last year, can you tell us a bit about the journey Nasdaq's been taking, and why you decided to use the cloud?

Yeah, absolutely. To start with a little background: Nasdaq is known for being the world's first electronic stock exchange and for our equities and options markets, but our footprint extends well beyond our own markets. Nasdaq provides mission-critical solutions across the trade life cycle to more than 2,300 financial institutions and 130 marketplaces globally, in addition to 6,000 corporates. We've been a cloud pioneer since 2008, and we've worked closely with AWS; we recognized early on how cloud technology would enable us to be prepared for future demands in capital markets.

We had migrated the vast majority of our surrounding systems prior to 2020, but it was always the matching engine that presented the biggest challenge. That's the technology that turns an order into a trade; it's really the center of the capital markets and is key to efficient and transparent markets. So that matching engine was always the challenge.

Can you tell us a little bit about the matching engine and why that was the biggest challenge?

Yeah. That matching engine is responsible for seamlessly transacting hundreds of billions of dollars a day worth of transactions from millions of investors, which is also measured in billions of messages a day. To successfully execute that and to manage unpredictable events, the underlying infrastructure for the matching engine needs to be hyper-scalable, ultra-resilient, and highly performant with ultra-low latency.

You talked a little bit about Outposts, and Outposts presented a great solution for us: it enables us to bring the power of the cloud to our clients. As part of our strategy over the past three years, we've worked closely with your team and others at AWS to validate and fine-tune the Outposts solution and make sure it operates under the demands of the market.

And last year, at re:Invent 2021, we announced our intent to migrate our first options market, Nasdaq MRX, to AWS. You know, it's easy to get on stage and make announcements; it's much harder to go and deliver. So how have we done in the last year?

Well, I'm delighted to share that the migration of Nasdaq MRX, one of our options markets, is well underway, and several systems are fully transacting on AWS Outposts. That infrastructure and the transition have gone very well so far. That's amazing.

So what are the early outcomes you've seen? I mean, some folks said: hey, running on the cloud, is that something you can actually do as a market? What have the results been?

Yeah, and I think there's good news here. By using the highly performant and optimized EC2 instances that we co-developed with AWS, and by leveraging our multicast-capable, dedicated physical network, we've been able to maintain that experience and also improve on it. We measure that by the round-trip time, the order-to-trade latency, which is down in the low double-digit microseconds, and we've improved that by about 10% on Outposts.

Additionally, that gives us the ability to use AWS infrastructure on our premises, in Regions, or with our global client base in more than 50 countries around the world.

And what are some of the future plans for Nasdaq in the cloud?

Well, we are really just getting started. We're doing our second market in 2023, and it also goes beyond our own markets. We talked about the 130 clients we have globally in 50 countries, and we see a number of those kinds of workloads that are either highly regulated or have low-latency, proximity, or compute-intensive requirements. Together with AWS, we're creating a blueprint to move those workloads to the cloud.

Well, Nikolai, it's been amazing working with you and the team. It's been an incredible journey; we've already solved a lot of technical challenges together, and the results are already speaking for themselves. I look forward to more success together with Nasdaq. So thank you very much for joining us today.

Thank you, Dave. Excellent, thank you. It's been a really, really fun story. And just to end off: we have many compute sessions here this week with members of the EC2 team and the broader AWS team, and I really recommend this list if you can find time to catch them at re:Invent, or maybe catch them online on YouTube later.

We started off with this little guy that wanted to be a rocket ship, and it's been a journey for us on EC2, focused on improving price performance and making sure we provide you with the tools, services, and instance types to really be able to run any workload you want to run on AWS.

I think the super important thing is that we never allow ourselves to be comfortable with where we are. Even though we've collectively achieved so much and been able to do so much together, we really are just getting started.

Thank you very much for your time.
