HPC on AWS: Solve complex problems with pay-as-you-go infrastructure

Hello. I'd like to start with a show of hands. How many people are currently running high performance computing workloads on AWS? If you could please raise your hands. Thank you.

How many of you are currently running high performance computing workloads in a data center in your own premises? Please raise your hand.

Ok. I'm gonna assume the rest of you maybe aren't even sure what this high performance computing stuff is. But you heard about it in Peter's Monday Night Live or in Adam's keynote. And therefore you're like, I need to check out what is this HPC on AWS?

So thank you for coming.

I had to share this little anecdote I heard from one of my colleagues in the quantum computing realm, because I think it's very reflective of the difference between quantum and traditional computing. Before his session, he was staying very hydrated, like they tell us to do here in Vegas. So he needed to make a little pit stop before he was presenting, and he got turned around in the hotel and couldn't find his way back from the bathroom to the session where he was supposed to be presenting.

And so he's literally bouncing around from end to end, getting told to be in different places, not really sure how to get to his session. Finally, at the last minute, he gets pointed in the right direction, gets to the right conference room, comes in, and starts presenting very haphazardly. About five or six slides in, he realizes that although his clicker is working, and his teleprompter shows the slides changing, there is nothing on the screen behind him.

And so I think that kind of "you're not sure it's here, you're not sure if it's there, you're not sure what the real output is" hopefully shows the difference between quantum computing and traditional HPC.

And Simone, I love you; just a friendly joke at your expense. But really, why are we here? We're here to talk about high performance computing. And again, for those of you that aren't really sure what high performance computing is, we're going to go a little bit into how high performance computing is all around us and why it's so important to all of our workflows. For those of you that are more familiar with high performance computing, we're going to talk specifically about how we think about high performance computing on AWS, what we think some of our differentiators are, as well as what the similarities are: what are those core components that we think are essential regardless of where you're doing HPC, whether in an on-premises data center or in the cloud.

And then we'll have a great example from one of our customers, Eli Lilly, about how they're doing some exciting things in drug discovery with their HPC workloads on AWS, and then we'll wrap up.

I have to say, when I started at Amazon about five years ago, there was a real strange perception when I showed up at Supercomputing: who is this person from Amazon Web Services, and why are they at a supercomputing conference? Because "web" is in your name, and I know you can do enterprise workloads, but you can't do real supercomputing, right?

So why are you here? And in defense of those naysayers, there was a lot of work we had to do to really achieve the performance and demonstrate the capabilities to where we could say with confidence that we can run high performance computing workloads in the cloud. It's taken us a while to get there. But at the same time, there has definitely been a shift in the mindset of those working in the industry. At first it was, you're not performant enough. Then it was, well, surely you can't be secure. Then it moved to, OK, but there's no way you can compete with us on cost. Then it was, OK, maybe I'll burst some of my workloads to you. OK, maybe I'll move entire pipelines to you. OK, maybe I need to think about what this cloud strategy is, because this thing seems to be real.

And we're really seeing that migration of workloads start. Part of that is a mindset change and us earning the trust of our customers, and part of that has been on us in terms of development. I hope you see throughout this presentation some of the features and capabilities that we've brought to our customers to demonstrate why we think high performance computing is so important and why we're putting those resources behind developing them for you.

One of the things we heard from customers in that working backwards process is: you know, I'm starting to move to more of a complex pipeline type of workflow, and I've got this box of fixed infrastructure that I'm playing Tetris with daily using my scheduler, saying, OK, I've got all these complex jobs, how do I fit them all in there? And it's all about keeping that box tightly packed.

I mean, we as HPC gurus always prided ourselves on: what is your utilization? Is it 95%? Is it 99%? Is it 99.9%? But what we never asked is, what is the business value I'm actually getting out of burning that energy? That's one of those things that matters, especially today as we're looking at the importance of climate change and energy efficiency: not just are we keeping the cores busy, but are they doing things that actually matter?

I think that's one thing we really need to challenge ourselves on: what are we doing with the compute that we're running? And so what we see here is, hey, back in the day we were doing a great job of keeping that box packed, of keeping all the cores busy. But those pipelines don't really fit nicely into that box, you may have things happening out of order, and you may not get the results you want when you want them.

And really the box itself is a constraint, because when you're thinking about a hardware procurement cycle, you're looking at something where maybe once every four or five years I go out and tender something that's maybe the 80% solution. A majority of my workloads run in roughly this shape and need this kind of memory-to-core ratio, this CPU frequency, this much memory on the box.

And maybe I'll throw in a few other exotic accelerated nodes just for some workloads. But predominantly I've baked that box for the next four to five years, unless I do an intermediate refresh, which some industries do. But even then I'm looking at two to three years before I can pull out even a portion of that box.

So how do we change that with AWS? What does that enable? Well, really, we blow the top off of that box and we say, OK, let's not focus on maximizing the efficiency of cores that you're burning all the time. Let's only use the cores you need when you need them. Let's help you create bespoke clusters for a certain portion of your pipeline when you need them, and apply infrastructure that is tailored to that section of your workload.

If this part of your pipeline requires accelerated instances, let's create a cluster of accelerated instances. And when that portion is done, let's shut them all down. And if this portion of your workflow requires some super high frequency compute instances, let's spin them up and create a cluster. And when that portion is done, let's shut them back down.

So you're only paying for the effort that is creating business value for you. And as you can see here, it handles those spikes much more easily. We can burst when we need to, and we can shut things down when we need to.

And we've had some interesting conversations with customers around this. Think about it: if I have a job that takes 1,000 core hours to process, I can run it on 1,000 cores for one hour and get my results in an hour, or I can run it on 100 cores for 10 hours. Either way it's the same 1,000 core hours, and the same price.
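As a rough sketch of that arithmetic in code (the per-core-hour price here is made up purely for illustration, not an actual AWS rate): the total core hours, and therefore the bill, stay the same whether you run wide and fast or narrow and slow; only the time to result changes.

```python
# Illustrative only: same total core-hours and cost, different time-to-result.

def core_hours(cores: int, hours: float) -> float:
    return cores * hours

price_per_core_hour = 0.05  # hypothetical rate, not an actual AWS price

for cores, hours in [(1000, 1), (100, 10)]:
    total = core_hours(cores, hours)
    print(f"{cores:>5} cores x {hours:>2} h = {total:.0f} core-hours, "
          f"cost ~ ${total * price_per_core_hour:.2f}, results in {hours} h")
```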

So what is the business value? What difference could it make to you and your workloads if, instead of having to wait 10 hours for the results of that calculation, you could have them in an hour? How does that change your cycle of innovation? How does that change how your research scientists, your engineers, and your compute-intensive applications interact with one another if you can cut that cycle time down?

And that's the additional value we think we bring with the elasticity of AWS. But one thing remains constant, whether it's HPC in the cloud or HPC on premises: there are some core components you have to have to even be able to call it high performance computing.

We have to have compute. We have to have networking to connect the compute instances. We have to have a performant storage system. We have to have a job scheduler of some sort that is actually taking the inputs and scheduling the jobs on the compute. And then we want some sort of orchestration layer providing management of the overall cluster. And we have the same in AWS.

And so I'm going to go over each of those components in depth, and what our typical HPC infrastructure looks like.

The first one is compute. One of the most exciting things about this week for me has been the expansion of our HPC portfolio. We can even call it an HPC portfolio now, and I'll go into depth about each one of these in a moment. Earlier this year, we announced Hpc6a, our first ever dedicated high performance computing instance, and customers have loved it. In fact, they loved it so much they asked us for more and said, why can't we have this in our favorite flavors of silicon? As much as we love AMD, maybe we have specific workflows that are tailored toward taking advantage of Intel's MKL or special instruction sets like AVX-512. Or maybe we're an Arm shop and we really want the best performance per watt, because we're focused on energy consumption and not just on the work we can get done.

Can we get some HPC instances based on Graviton? And so in Peter's keynote on Monday night, we preannounced, coming soon, the Hpc7g based on our Graviton3E. And then in Adam's keynote on Tuesday morning, we announced the general availability of Hpc6id, our first Intel-based high performance computing instance.

So now we have this full complement. Whatever your favorite silicon vendor is, whatever your workload is, we feel like we have an HPC instance geared to satisfy your demands.

And we're going to continue to innovate on this: as each of our vendors comes out with new iterations of silicon, we're going to continue to bring you new HPC instances. So let's talk a little bit more about that Hpc6a instance. Again, it's based on the AMD third-generation EPYC line, more commonly known as Milan. This has really become one of the success stories for us, especially around weather simulation and computational fluid dynamics. We've seen customers like Maxar, and customers like DTN that I'll talk about later, dramatically change the price performance of the weather prediction they perform on behalf of their customers, and get models out faster, because of the Hpc6a instances.

With the Hpc6a, you've got 384 GiB of memory with 96 cores, so a 4:1 ratio, which is very friendly for a lot of those numerical weather prediction algorithms. But it's not really beefy, as some of our other customers pointed out. Again, as we repeated that working backwards process and listened to customers, we heard: we love this Hpc6a for certain workloads, but there are a few things we just wish it had. We wish it had more memory, we wish it had an on-instance disk, and could you beef up the networking a little bit so we can feed it?

So that's why we went back and cooked up, with our friends at Intel, the Hpc6id. Each of those initials is a bit of a mouthful, but they all matter: HPC, 6 for the sixth generation, I for the Intel variant, D for the on-instance disk. And it's not just any on-instance disk; this is a traditional fat node, with 15.2 terabytes of on-instance storage. We also bumped up the interconnect from 100 gigabits to 200 gigabits, so those fat nodes can talk to each other more quickly. And not just that: look at the memory-to-core ratio we talked about. This thing has over a terabyte of memory on it, giving you a whopping 16:1 memory-to-core ratio. This thing is going to be a beast, and it's especially geared toward FEA workloads and complex oil and gas simulations.
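If you want to sanity-check those memory-to-core ratios yourself, here is a hedged sketch using the EC2 DescribeInstanceTypes API via boto3. The instance size names (hpc6a.48xlarge, hpc6id.32xlarge) and the region are assumptions; HPC instances are only available in certain regions, so adjust as needed.

```python
# Assumes boto3 credentials are configured and the chosen region offers these instance types.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")  # a region where HPC instances are offered

resp = ec2.describe_instance_types(InstanceTypes=["hpc6a.48xlarge", "hpc6id.32xlarge"])
for it in resp["InstanceTypes"]:
    cores = it["VCpuInfo"]["DefaultCores"]
    mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
    print(f'{it["InstanceType"]}: {cores} cores, {mem_gib:.0f} GiB memory, '
          f"{mem_gib / cores:.0f}:1 GiB per core")
```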

But we weren't done there. How do we ensure that customers who are looking for performance per watt, and who are looking to move into the Arm ecosystem, have an HPC instance that satisfies their workloads? So we went and looked at the Graviton3 we'd produced for the C7g, and we said, can we kick it up a notch for HPC customers? What we found is that by taking off some of the controls we'd put in place, controls the vast majority of traditional compute users can't take advantage of anyway, vector instruction performance was boosted by up to 35%. Now, not every workload is going to benefit from this, but for those high performance computing workloads that really want every last ounce of performance, especially at the high end, this can be a game changer. That's why we call this the Graviton3E, for enhanced, and that's what we built into the Hpc7g that will be coming soon in the new year.

What enables us to keep iterating on these HPC instances, and why is that so important to high performance computing users? Underlying every new instance is the AWS Nitro System. Now, there are lots of talks on how the AWS Nitro System helps with security and ensuring that the virtualization is as protected as possible from any sort of interruption. But the thing I want to focus on, which HPC customers especially care about, is the virtualization penalty. As you can see on the right part of the slide, the top line is pure bare-metal, non-virtualized performance on each of these codes, and these are common, extremely demanding open source HPC codes.

Now look at the virtualized bars. OK, we haven't removed the entirety of the virtualization penalty, but we're looking at under 1% of performance taken up by that virtualization. So you can see the extreme capability and flexibility the Nitro System gives us by offloading all of those other functions to the AWS Nitro System, so that the instance itself can give you almost the entirety of its performance.

So we talked about the compute. Now let's talk about the networking: how do these compute instances talk to one another? One of the things we really had to work on is what to do about low latency interconnects. One of the questions I would always get as I was walking around Supercomputing is, when are you going to have InfiniBand? Come on, you can't be a real supercomputing provider if you don't have InfiniBand. And I'm like, well, that's not really how we think about HPC. InfiniBand is great if you're operating in that box and really optimizing paths within that box, but it's not the elastic networking and the flexibility in the way that Amazon thinks about networking. Within AWS we have these extreme meshes, so you have dozens and dozens of paths to get from one node to the next.

When we look at something like InfiniBand, it says, well, let me find a single path and I'm just going to hammer as much data across that path as I can. Believe it or not, that's not as efficient as our protocol, SRD, the Scalable Reliable Datagram protocol, which takes advantage of that multi-path and says, you know what, I've got all these paths to get from A to B, I'm just going to spray across that network and we'll figure it out when it gets to the other side. If there's any congestion anywhere in the network, or if something goes down or there's a problem with a switch, the traffic just goes right around it. Everything doesn't have to arrive in order; we put it all back in order when it gets to the other end.
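To make the multi-path idea a bit more concrete, here is a toy sketch in Python. This is purely a conceptual illustration of spraying packets across many paths and reassembling them by sequence number at the receiver; it is not the actual SRD protocol or its implementation.

```python
# Toy model: spray chunks of a message across several "paths", let them arrive
# out of order, and reassemble by sequence number at the receiver.
import random

def spray(message: bytes, paths: int = 8, chunk: int = 4):
    packets = [(seq, message[i:i + chunk], random.randrange(paths))
               for seq, i in enumerate(range(0, len(message), chunk))]
    random.shuffle(packets)  # arrival order depends on per-path congestion
    return packets

def reassemble(packets):
    return b"".join(data for _, data, _ in sorted(packets, key=lambda p: p[0]))

pkts = spray(b"hello, elastic fabric adapter")
assert reassemble(pkts) == b"hello, elastic fabric adapter"
```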

And what we've actually found is that, while yes, there are micro benchmarks like half round-trip ping-pong latency, which a lot of my friends in networking love to talk about, and yes, we're not technically down to single-digit latencies, once we get down to 10 to 12 microseconds and actually put real-world applications on top of EFA, you find that the performance is comparable and the scaling in some cases is even better than InfiniBand. And so that's why EFA has been an essential component of all of our high performance computing workloads.

So we talked about the compute and the network. Now the storage. This is near and dear to my heart, because once upon a time I worked for a company that was developing Lustre. Lustre is basically the gold standard of POSIX-compliant file systems for high performance computing workloads; it has been beat up and hammered on and improved for decades. One of the big gaps when I got to AWS was: where's our high performance file system? It's great, now we have the compute and the network and they can talk to each other, because we had EFA in 2018. But we didn't have high performance file storage we could point workloads at. I mean, everybody loves S3, but again, traditional HPC people are not exactly moving to object storage overnight.

So let's talk about how we do file storage. We went and developed our managed Lustre offering, Amazon FSx for Lustre. What's so cool about this is that many people coming from that traditional HPC background aren't comfortable with puts and gets and S3-style commands. So what we did is put S3 behind it, and we said, you know what, you use your POSIX commands; behind the scenes you can hydrate that data from S3, rehydrate it back into Lustre when you need it, do your computations on it, and when you're done, push the results back to S3 and shut the file system down. So not only are we looking at dynamic allocation of compute resources, we're looking at dynamically allocating storage. This isn't the old days of "I need a petabyte of scratch space and I'm going to leave it up forever." This is dynamically provisioning the storage you need, for the workload you need, at the time you need it.
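A hedged sketch of that "hydrate from S3, compute, tear down" pattern using the FSx API through boto3 might look like the following. The bucket name, subnet ID, and sizing are placeholders, and in practice you would wait for the file system to become available, mount it on the cluster, and export results before deleting it.

```python
import boto3

fsx = boto3.client("fsx")

fs = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                      # GiB; smallest scratch size
    SubnetIds=["subnet-0123456789abcdef0"],    # placeholder subnet
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://my-hpc-input-bucket/run-42/",   # lazy-load (hydrate) from S3
        "ExportPath": "s3://my-hpc-input-bucket/run-42/",   # write results back to S3
    },
)
fs_id = fs["FileSystem"]["FileSystemId"]
# ... mount the file system on the cluster, run the job, export the results ...
fsx.delete_file_system(FileSystemId=fs_id)     # shut the scratch file system down when done
```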

But then we said that wasn't enough. That's great, we've got the high performance. But what about customers operating in a hybrid environment, with a significant amount of data in their on-premises filers, whether those are Isilon or NetApp or pick your favorite on-premises hardware? How do we help them present that in a unified view to their workloads? That's why we developed Amazon File Cache. It presents you a unified namespace, and what's really cool is that not only can you put your on-premises files behind it, not only can you put your FSx file systems behind it, but you can put S3 objects behind it and present it all in one namespace. Not only does that accelerate your workflows, it helps you reimagine how you think about storage. So if you haven't tried Amazon File Cache, which we introduced just earlier this year, I strongly encourage you to take it for a spin.

One of the last components we'll talk about is the scheduler. When we're looking at our system, we have to have a performant scheduler that can handle all these devices. But as we were talking to customers about schedulers and our existing native scheduler, AWS Batch, they said, you know what, I really like the idea behind Batch, I like the idea of a native scheduler, but we have made a business decision to consolidate our infrastructure on Kubernetes. So if you don't support Kubernetes, I don't know if that's the right thing for me. So we said, OK, we've heard that from enough customers, how do we make their lives easier? Because they said, we're moving to Kubernetes, we need some way of scheduling on it, and we're not really happy with the way native scheduling works on Kubernetes.

So that's why at KubeCon, just earlier this year, we announced Batch support for EKS. Now you can schedule your workloads with Batch via ECS, which was the first back end we released, on top of Fargate if you want more of a serverless environment, or now EKS. Depending on your infrastructure, we have the right back end for you, all of them with the same Batch API for scheduling your workloads.
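A minimal sketch of what "the same Batch API regardless of back end" looks like from the client side: the job queue (and the compute environment behind it) decides whether the job lands on ECS, Fargate, or EKS, while the submit call itself does not change. The queue and job definition names below are placeholders.

```python
import boto3

batch = boto3.client("batch")

resp = batch.submit_job(
    jobName="cfd-step-001",
    jobQueue="my-eks-backed-queue",    # could equally be an ECS- or Fargate-backed queue
    jobDefinition="cfd-solver:3",      # placeholder job definition name:revision
)
print("Submitted job", resp["jobId"])
```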

And then the last component of our architecture is the overall management. OK, how do I actually configure this cluster? How do I set it up? How do I ensure it's up and running in the way I'm happy with? How do I attach storage to it? How do I connect my scheduler? That's where we created AWS ParallelCluster. AWS ParallelCluster is based on an older open source project called CfnCluster, co-developed by Intel and AWS. Back around 2017, 2018, we were looking at how to help customers onboard faster and make that migration journey onto AWS. But what we saw along the way is customers went: that's great, I've got this little toolkit, it's cool, it creates CloudFormation templates, but is this really a supported service? What do I do when I have problems?

And so that's why we said, you know what, it's not enough to just have an open source project behind this. We need a fully supported AWS service with a service team on call, to show that we stand behind this. That's why we rebranded it AWS ParallelCluster, which gives you command line functionality, and soon a UI, to dynamically instantiate your clusters, again creating those bespoke clusters you need for the portions of your workflow that need them. AWS ParallelCluster can also sit there and monitor the queue depth on your scheduler, and based on your business logic, you can draw the cluster down to where it's just a head node sitting there. Then maybe a certain queue depth builds up in your jobs, and all of a sudden it starts dynamically allocating resources again, because it sees work showing up and spins up more nodes. Queue's empty? It starts shutting things down. All of that allows you to save money and to use the resources you have more efficiently.
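As a hedged sketch of what that scale-to-zero behavior looks like in a ParallelCluster 3 style configuration: MinCount of 0 lets the compute fleet drain down to just the head node, and MaxCount caps how far it bursts when the Slurm queue fills. Subnet IDs, key name, and sizes are placeholders; check the ParallelCluster documentation for the authoritative schema before relying on this.

```python
# Writes a placeholder ParallelCluster config; the pcluster CLI consumes YAML like this.
import pathlib

config = """\
Region: us-east-2
Image:
  Os: alinux2
HeadNode:
  InstanceType: c6i.xlarge
  Networking:
    SubnetId: subnet-0123456789abcdef0
  Ssh:
    KeyName: my-keypair
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: hpc
      ComputeResources:
        - Name: hpc6a
          InstanceType: hpc6a.48xlarge
          MinCount: 0      # scale all the way down to just the head node
          MaxCount: 64     # burst up when the queue fills
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0
"""

pathlib.Path("cluster-config.yaml").write_text(config)
# Then, roughly: pcluster create-cluster --cluster-name demo --cluster-configuration cluster-config.yaml
```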

And so with that, I'd like to turn it over to Dr. Chen, the Senior Director of Enterprise HPC for Eli Lilly and Company, who will tell you a little bit about what she and her team have been up to. Thank you.

Thanks, Ian. My name is Chen S. Roughly two years ago, I got a call from one of our senior leaders at Eli Lilly, and they said, would you be interested in leading the Enterprise HPC team? And I said, what the heck is HPC? The reason I got that call is that I actually started my career at Eli Lilly back in 2000. The job I had before this one was in portfolio strategy: my group was in charge of the portfolio investment decisions, helping leadership make decisions through competitiveness assessments of our pipeline, our molecules in Eli Lilly Research Labs. During that process I developed a good understanding of the Lilly pipeline and of the industry pipeline, because that's what a portfolio strategy job requires, and I learned that AI and machine learning are going to play a more and more important role in drug discovery. So I said I want to take on another role more closely related to AI and machine learning. That's the reason I got this call.

So at the time, it was really the convergence of HPC and AI/machine learning; that was exactly two years ago. Today I'm going to share some of our cloud migration journey in collaboration with AWS, and help you understand how the cloud is an integral part of drug discovery at Eli Lilly.

When we think of cloud, there are a lot of new terms and a lot of new things we need to learn. My traditional HPC team needed to gain their AWS certifications, for example, among other certifications. There is a lot to learn, and sometimes it can be overwhelming.

One of the best things I did after I took the job as Senior Director of HPC was to reach out to AWS and say, we need to do some deep dives here, because I have a lot of really important, business-critical applications and platforms that I need help with.

So what AWS did was offer what they call an online workshop, run by AWS ProServe. In the workshop, we selected five business-critical applications, cryo-EM being one of them, but also other applications such as our Studio, SINGER, NAM, and GRACE, all from different groups in Eli Lilly that speak to different parts of the business.

First, let me tell you a little bit more about the drug discovery process, and this is not just true for Lilly, it's true for the pharmaceutical industry as a whole. When you start the process of coming up with a targeted therapy, the first thing is to understand your target. This is a molecular target. Before anything moves into the pipeline, we first have to understand the target: its relevance in biology and in disease.

As an example, think about insulin for type 1 and type 2 diabetes. The insulin receptor is the target of insulin, because if you give the patient insulin, it will hit the insulin receptor on the cell and have the desired effect on the patient's glucose levels.

If you think of a small molecule, for example a pill like Jardiance, another diabetes product we have at Eli Lilly, the target is called SGLT2, the sodium-glucose transporter 2.

One important aspect of the target is that its three-dimensional structure is often different in a disease state versus a normal, healthy state. So understanding the three-dimensional structure of this protein target is really important: if we can understand what a normal, healthy protein target looks like, and we design a molecule to bind to the diseased target so that its three-dimensional structure changes back to match the normal target, then we have a better chance of being successful, for example.

And of course, this is an iterative process. It's not just one snapshot of the target's three-dimensional structure at one time, but also looking at it dynamically, across different tissues and throughout disease progression.

Now, cryo-EM is something you have probably heard about: the Nobel Prize in Chemistry a few years ago went to cryo-EM, and it has really revolutionized the way we gain insights from three-dimensional structure.

Because before cryo-EM, we spent a lot of money and invested a lot of resources in X-ray crystallography. With cryo-EM, we can gain three-dimensional structure insights much faster, and because we can do it much faster, we can do more iterations within a given time frame. So that's a really good thing.

The second point is that cryo-EM often gives us much higher resolution than the traditional X-ray approach. That's also good, because we want our results to be accurate.

A third point is that it's actually cheaper, because of the speed. Traditionally, before we can do X-ray crystallography, we have to grow the crystal form of the target. That often takes years, and sometimes it's impossible: either you can't form the crystal at all, or the protein target forms a crystal but not in a uniform enough format to actually resolve the structure. So this is really important.

On the right side of the slide, you can see the trend. These are three-dimensional structures that have been determined with cryo-EM and deposited into the public database, so everybody with internet access can learn about the three-dimensional structures of those proteins.

Now, this is one of the use cases, one of the five use cases we started with in the AWS online workshop. Together with AWS, and also a company called co square, we designed this architecture; I'm going to show you a more detailed diagram on the next slide. But in a nutshell, once the data comes off the microscope, it is immediately uploaded to an S3 bucket. From there, the scientist can just log in from an end terminal to the head node. In this case we listed CryoSPARC, a commercially available software package we use to analyze cryo-EM data, but we also have other software, open source and commercial, that could benefit from this architecture.

What the scientists do, when they're ready to analyze the data, is hydrate it from the S3 bucket into FSx for Lustre, so the data has a high-speed connection to the GPU compute resources we have. In this case we listed two different GPU resources, V100 and T4, and obviously these are just examples. The reason we list two different types of GPUs is that they have different speeds, and they also have different costs associated with them.

So we can optimize the return on investment: if we need a job done really fast, we spin up V100s; if the job is low priority, we can just send it to T4s. That really helps the scientists prioritize the most important workflows and data sets they want to analyze.
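As a simplified stand-in for that prioritization logic (not Lilly's actual implementation): high-priority reconstructions go to V100-class instances and low-priority ones to cheaper T4-class instances. The EC2 instance type names here are just examples of where those GPUs are available.

```python
# Map job priority to a GPU instance class; purely illustrative.
def pick_gpu_instance(priority: str) -> str:
    if priority == "high":
        return "p3.2xlarge"    # NVIDIA V100: faster, more expensive
    return "g4dn.xlarge"       # NVIDIA T4: slower, cheaper

for job, priority in [("apo-structure", "high"), ("overnight-refine", "low")]:
    print(job, "->", pick_gpu_instance(priority))
```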

This is the detailed architecture diagram. I'm not going to go into the details, but just as a reference point, there is a lot of flexibility built into this architecture, and a lot of scalability as well.

At Eli Lilly, we still have our own data center and our own on-premises HPC clusters. But we chose AWS for the cloud, doing all of this data analysis and storage in a completely cloud-native fashion, because of several benefits we have seen.

One is speed. It is so much faster for us to not have to write a budget proposal, go through the entire capital purchase process at the company, work with the vendor to place the order, and then have the vendor install the actual machine in our data center. That process alone takes several months at least.

The second is scalability. We don't have to make a commitment the way we would for an on-premises hardware purchase: if we have one cryo-EM microscope, we need so many GPUs, so many worker nodes, so many head nodes; if we have two or three microscopes, and we are in the process of acquiring more cryo-EM microscopes, we just grow in the cloud. So speed and agility come hand in hand in this case, which is really the goal here.

In summary, from the storage perspective, we use S3 as the large-scale data transfer mechanism to host cryo-EM data, and FSx for Lustre as the fast scratch space that enables the high performance computing. We use a variety of GPU resources, as I mentioned before, to optimize the cost-performance ratio. And finally, I believe, as I have seen not just at Eli Lilly but across the pharmaceutical industry, that HPC-scale applications for drug discovery and AI/machine learning will become more and more popular in the cloud in the future.

So with that, I'm going to turn this over to Ian. Have a good day.

Thanks, Chen. But it's not just customers like Eli Lilly that are doing amazing things, and it's not just geared specifically to cryo-EM drug discovery pipelines. As I talked about at the very beginning, HPC is ubiquitous; HPC underpins any number of demanding workloads.

One I wanted to talk about again is numerical weather prediction. DTN is a customer that has been working with us on how to accelerate their weather prediction, because the business they're in is getting that model and that weather report out to their customers so they can make investment decisions based on it as quickly as possible. The sooner they get it into the hands of their customers, the sooner those customers can trade on it, and the more value they bring with their service. So their ability to expand their cluster dynamically, and to shut it all down when that entire forecast process is done, is remarkably important to them, because they only need those cores, even if there are several thousand of them, for maybe 40 minutes a day.

I mean, can you imagine the investment they'd have to make, and have to rationalize, to buy HPC resources that would run at what the industry would consider a ludicrously low utilization rate if you do that math? But here they've built an entire business on it, because of the elasticity and the availability of resources on AWS.
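For what that math looks like, here is a one-line sketch: a cluster needed for roughly 40 minutes a day sits idle the rest of the time if you own the hardware.

```python
# Rough utilization of owned hardware used ~40 minutes per day.
minutes_used_per_day = 40
utilization = minutes_used_per_day / (24 * 60)
print(f"~{utilization:.1%} utilization of a fixed on-premises cluster")  # ~2.8%
```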

Another one that's kind of fun, because racing is always fun, is our friends at Dallara, who are doing computational fluid dynamics. Whereas when you're talking to Formula One as an organization, they're trying to evaluate how to design the race tracks and the rules for the cars, when you're talking to the individual teams, whether that's Ferrari, who you saw in the keynote, or here with Dallara, they're talking about how to design the actual vehicle itself to perform as well as possible.

And what's interesting is that within the rules they have different restrictions on how much compute they can use and how long they can use it for. So here we're helping Dallara be really efficient with their resources and get the most computation out of them, to design the most complex, efficient, and performant race cars for their drivers and their team.

Then lastly, one I'm really excited about is Commonwealth Fusion Systems. We've talked a lot about energy throughout this talk, and one thing that's very clear is the world's seemingly unending desire for more energy. One of the clearest ways we see societies advancing is through their use of additional energy. Unfortunately, that consumption of energy has had some, shall we say, negative long-term effects on our climate, and we're trying to figure out how to resolve that right now and wrestle with those impacts.

One of the exciting potential areas for providing long-term sustainable energy is fusion. And yes, this is one of those things that's been out there for a while; it's been in various movies, and it's always kind of five years away from being five years away. But Commonwealth Fusion Systems is doing some incredible work using HPC resources on AWS to try to make this a reality and provide sustainable, clean energy for their customers.

It's not just our customers, though, who say that AWS is a great place to run high performance computing. Kind of the industry gold standard is the HPCwire awards, and one of the things I'm most proud of is the fact that we have consistently, for my entire time at AWS, won the Readers' Choice awards. So this isn't a case of some editors going off into a back room and cooking something up; this is the readers of HPCwire voting and saying that, of all of the clouds, the best place to run my HPC workloads is on AWS.

And for me, again, it comes back to Amazon's customer focus. We started this talk by discussing how we work backwards from the customer. That's what tells me I'm doing my job: when customers say we continue to be the best place to run their HPC workloads.

And why does all this matter? Because these machines we're developing, these infrastructures, the networks, the storage devices, all of these at the end of the day are just there to empower the very smart people that all of you employ. At the end of the day, it's to help that research scientist, that engineer, get work done, get it done performantly and efficiently, and at the price performance that you want.

So again, my challenge to each one of you is to ask: what will you discover using HPC resources on AWS? I want to thank you for coming. If you're interested in more detailed sessions, again, it's hard in a session like this to really go into depth on all the capabilities we have for high performance computing on AWS, so I strongly encourage you to take a picture of that slide and go check those sessions out.

Again, it's hard to believe we're only at Wednesday. There are still tons of sessions to be had, so I invite you to please check out those additional sessions. And if you have any questions after the session, Chen and I will be out in the hallway, and we'd be happy to spend time talking with you. Thank you.

余额充值