Delivering low-latency applications at the edge

I am Pranav Chachra. I'm a Principal Product Manager at AWS, and I specifically focus on AWS Local Zones. With me, we have Rob Czarnecki, who is the Head of Product Management for Hybrid Cloud and focuses on Outposts, amongst other services.

I'm really excited to talk about delivering low latency applications at the edge, especially because I've seen the journey of launching Outposts and Local Zones in 2019. Since then, we have brought a number of services and features to these offerings and extended them to multiple locations across the globe.

So before we start, I will walk you through today's agenda. In this session, we'll discuss different use cases where low latency matters. Then we'll dive into Local Zones and Outposts and understand how we are enabling customers to serve low latency use cases in both metropolitan areas and on premises. Then we will discuss Riot Games' experience with Local Zones and Outposts and how they're using both of these offerings to enable low latency applications at the edge for gamers. And finally, we'll end the session with Q&A for any questions from the audience.

So what are the specific use cases for which customers are using hybrid edge offerings? We've seen a number of customers from different verticals and industries using hybrid services to unlock a variety of use cases. In this continuum, we have the enterprise migration use case on the left hand side, which includes everything from back office applications to other applications that customers have been working to move to the cloud. And what we've heard from customers is that given the interdependencies, size, and constraints of these applications, it can be daunting to move them to the cloud in one go.

And then on the right hand side, we have low latency use cases, which include things like real time gaming, machine learning, IoT, content streaming, and media content creation as well. And finally, in the middle, you see use cases which require data to be in a particular location, either because of residency, regulatory, or local data processing needs.

And what we've heard from customers is that when it comes to all the use cases that I spoke about on the previous slide, what customers want is the consistent experience that they're used to in regions. What that means is that they want the same APIs and services, the same tools for automation, the same security controls, and the other operational pieces. The expectation is that they should have the same experience that they have in regions today.

And with this consistent experience across regions and edge environments, what we've seen is that customers are not only able to accelerate innovation, but they are also able to manage their global deployments using the same skills and toolsets that they're used to.

So now that we've covered the overall spectrum of use cases where hybrid services are being used today, I'm going to dive deep into low latency use cases specifically and look at how various AWS offerings are unlocking them.

To give you more context, in the next few slides I'm going to walk you through what low latency means for different verticals and industries.

The first important low latency use case is real time multiplayer gaming, where companies like Riot, Epic, and Supercell deploy their game servers closer to end users across the globe. I'm a gamer myself, and when it comes to low latency gaming, if there is lag, or a different experience than what my competitor sees, it of course increases my chances of dropping from the game. That's where latency becomes key for companies like Riot and Supercell.

So depending on the game, a latency of 20 to 40 milliseconds is considered ideal for a good gameplay experience. And given the low latency requirements, these customers tend to deploy their applications across multiple locations so that they can be closer to their gamers across the globe.

Next, we have media and entertainment use cases, with customers like Netflix who are creating beautiful content for us. They have these expensive artist workstations in specific metro areas where they have animation hubs, like Los Angeles, and they tend to use these workstations for things like video editing and live production. For these artist workstations, low latency again becomes very important to having a jitter free experience for the artists.

And specifically talking about the latency here, these customers have even lower latency requirements: they need less than five milliseconds of latency from the animation hubs and offices where the artists sit to where these virtual workstations are running, so that artists can create the beautiful content we're used to without going through any jittery experience.

Now we're going to discuss a completely different vertical, financial services, where we have customers like Nasdaq who run financial exchanges in places like New York. Considering their real time data and distribution needs, these customers again require ultra low latency to the compute where they're running things like real time risk analytics and market analytics for trading data.

And it's not just financial industries; we have customers from healthcare as well who require low latency compute. These customers rely on various management software to power things like radiology, health systems, and clinical research. When it comes to these use cases, these customers again need rapid, low latency access to high quality images for things like radiology, so that radiologists can focus on improving patient outcomes without worrying about these healthcare systems.

And then finally, another great example is the enterprise migration use case where, as I said, there are enterprises across the globe running their workstations and other workloads on premises in their data centers. Many of these customers told us that it can be daunting to migrate a portfolio of interdependent applications to the cloud in one go.

As a result, many of these customers tend to like a hybrid environment that provides ultra low latency communication between their on premises environment and where they're running their workloads in the cloud, so they can migrate their applications incrementally without drastically changing or refactoring them.

So now that we have discussed different use cases for low latency, how are we really serving these customers on AWS today? What we're doing here is basically bringing AWS to wherever our customers need it.

On the left hand side, we have AWS regions, which are available in 32 locations and tend to meet the latency needs of the majority of customers. Then we have Local Zones and Wavelength in specific metro areas, which basically allow customers to have the benefit of compute and storage closer to their end users and their on premises facilities without the need to operate their own data centers.

And then we have AWS Outposts, which is basically meant for customers who need to run their workloads on premises. Outposts extends AWS infrastructure to virtually any place you can think of, things like data centers and on premises facilities, for the truly consistent experience that customers are used to in region.

And then we have the Snow family and IoT, which I won't be covering in this session. We have other speakers who are going to talk about them a bit more in the next few sessions.

In this section, we'll talk about low latency use cases with a specific focus on metro areas and telco networks, where I'm going to talk about how Local Zones allow us to bring the cloud closer to more customers and end users and meet these low latency use cases in metropolitan areas.

So before I dive deep into Local Zones, let's first understand what they really are. Logically, you can think of Local Zones as very similar to Availability Zones; it's just that they lie in a different physical geography.

So what we've done here is build data centers in particular metro areas, where AWS deploys and manages the infrastructure in these large metropolitan areas. And these Local Zones are connected back to the parent region using multiple redundant high speed links, so that customers can continue to leverage the services that they're used to in regions as well.

Similar to regions, Local Zones also provide the consistent, on demand, elastic experience that customers are used to, with pay as you go pricing, and you get the same security, core services, and developer experience in Local Zones.

So now that you've understood what Local Zones are, the logical question of course is why we really launched them, and the answer sort of lies in developer expectations.

If we look at this visual here, let's say I'm a gamer in the Houston metro area playing a game. Before we launched Local Zones, we had four regions in the US. So if I'm in Houston, the nearest region for me is the Northern Virginia region, which is more than 10 milliseconds away from the Houston metro area.

What that means is that the majority of end users across the US like me, if they're using these regions for their gaming sessions, get tens of milliseconds of latency across the contiguous US.

Then what we did here was launch the Houston Local Zone, which brought compute closer to end users like me. That means I can now access the gaming session with single digit millisecond latency, which gives me the consistent, real time experience that end users expect.

So where do we have these Local Zones beyond Houston? If you focus on the US here, we started launching Local Zones in 2019, and our first Local Zone was in the Los Angeles area. Since then, we have expanded our presence to 16 more metros, and the intention is to provide a single digit millisecond latency experience to users across the contiguous US.

And we're not stopping there. Building on the success we had in the US, we are also launching Local Zones internationally. Two years back, we started launching Local Zones globally, and as of now, we have 17 Local Zones available in various metropolitan areas outside the US. This includes places like Auckland, Bangkok, Lima, Manila, Santiago, Taipei, and others that you see in yellow here.

We also announced 15 more Local Zones in many more countries and metros, including places like Colombia, Greece, and the Netherlands, which again will enable customers across the globe to provide a low latency experience to their end users and on premises facilities.

So now that you have understood why we launched Local Zones and where they are available, a key aspect of Local Zones is of course what services are available in them.

When we launched these Local Zones, we focused on bringing the services where latency matters. So we brought Amazon EC2 for compute, Amazon EBS for block storage, and ECS and EKS for containers, plus things like Amazon VPC, Route 53 geoproximity routing, AWS Shield, and Direct Connect for the consistent networking and connectivity experience that customers are used to today.

And then the ones that you see in gray here are some additional services that we added based on feedback from customers in specific industry hubs and metros. So FSx, RDS, EMR, and Elastic Load Balancing are available in certain metros.

And the intention is to keep bringing more services to more locations depending on the use cases that we see in those metros.

So we just discussed that Local Zones are a logical extension of regions, and that is also reflected in the pricing. If you're using Local Zones, there are no fees for enabling them.

EC2 instances in Local Zones have on demand pricing, and you can also purchase Savings Plans to get the benefits of longer term commitments with discounts. Instances and other AWS resources running in Local Zones have their own prices, which vary across locations, similar to how it usually works for regions as well.

And when it comes to data transfer, the prices are very similar to how they work in an AZ. So if you have an EC2 instance in a Local Zone sending traffic to S3 back in the parent region, it works just like how you're able to access S3 in the region from an AZ, without paying any data transfer charges.

And then beyond on demand and Savings Plans, we also have Spot pricing available in certain metros, and we continue to expand that presence globally as well.

So we have discussed what Local Zones are, why we launched them, where they are available, and the pricing model. Now let's understand how Local Zones work in practice.

To get started, you just need to enable Local Zones for your account, and you can do that from the console or the API. Once you've enabled a Local Zone, you can extend a VPC from the parent region to the Local Zone by creating a subnet and linking it to the Local Zone. Then you can start launching resources like Amazon EC2 in that subnet and start building your applications to meet these low latency needs.
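To make that flow concrete, here's a minimal boto3 sketch. It assumes the Los Angeles Local Zone (us-west-2-lax-1); the VPC ID, AMI ID, CIDR block, and instance type are placeholder assumptions, since zone group names and available instance types vary by location.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# 1. Opt in to the Local Zone group for your account (one-time step).
#    "us-west-2-lax-1" (Los Angeles) is an illustrative zone group name.
ec2.modify_availability_zone_group(
    GroupName="us-west-2-lax-1",
    OptInStatus="opted-in",
)

# 2. Extend an existing VPC by creating a subnet pinned to the Local Zone.
#    The VPC ID and CIDR block are placeholders for your own values.
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.0.128.0/24",
    AvailabilityZone="us-west-2-lax-1a",  # the Local Zone, not a regional AZ
)

# 3. Launch an instance into that subnet; instance type availability
#    differs by Local Zone, so t3.medium here is just an assumption.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["Subnet"]["SubnetId"],
)
```

The only Local Zone specific step is pinning the subnet's AvailabilityZone to the zone name; everything else is the same EC2 workflow you use in the region.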

This architecture is a good representation of what we discussed on the previous slide. You see the VPC in the US West (Oregon) region, which is extended to the Local Zone in Seattle. What we've done here is create a subnet linked to the Seattle Local Zone; the VPC has been extended, and all the things like route tables and security groups have been taken care of.

And the fact that Local Zones have their own connectivity to Direct Connect and to the internet gateway allows you to meet your low latency needs for both end users and on premises facilities.

In addition, I discussed earlier that Local Zones are connected back to the parent region using redundant high speed backbone connections, which means that you can use things like Amazon S3 and RDS back in the parent region, just like how you use them from an AZ today.

At the beginning of the presentation, I discussed examples where latency matters. Now that you understand Local Zones, I'm going to give you a couple of examples where Local Zones are solving these business problems, and the impact we've been able to create with them.

The first good example is the Netflix use case, where Netflix had been thinking about moving the artist workstations in their animation hubs, in places like Los Angeles, to the cloud. And when it comes to these artist workstations for media and entertainment customers, latency is key to having a jitter free experience.

So what Netflix did here was use Direct Connect to establish connectivity between the animation hub in LA and the Local Zones in LA as well, which allowed them to have low latency access to the compute, with less than five milliseconds of latency. The artists were able to run these virtual workstations in the cloud for things like live production and media editing, without thinking about how to scale their workstations and without going through any jittery experience.

Another great example is MINDBODY, who were operating their own data centers in the LA metro area. And what they told us was that it can be daunting to migrate a portfolio of interdependent applications to the cloud.

So what they did here was also use Direct Connect to establish hybrid connectivity between their own on premises data centers and the Local Zones in the LA metro area. In turn, this enabled MINDBODY to move their applications incrementally without refactoring them too much.

So now that you understand Local Zones and the various business problems they're solving, I'm going to hand it over to Rob, who will discuss how customers are using Outposts in addition to Local Zones to deliver these low latency application experiences on premises.

Thanks, Pranav. Welcome everyone. I'm Rob Czarnecki. I lead product management for many of our hybrid cloud services and spend a lot of time focused on Outposts.

So as Pranav described, we start in the region, and we can extend out to Local Zones, which will always be available and online in more metro areas than we are able to build regions in (not that we're slowing down the pace at which we build regions). But then we can go even farther out with AWS Outposts, to bring AWS on premises, or to metros where a Local Zone isn't yet online because we need to extend our network border, or things like that.

This is super useful for customers that have their own data centers, whether that's in a colo facility, a manufacturing plant, or maybe a telco edge site, where there might never be a Local Zone close enough, or where the compute has to be adjacent to the customer's existing technology.

So today the Outposts family comes in two different flavors, if you will: a 42U rack version and 1U and 2U server versions. The rack version is the very same rack that we use in our regions, and we manage everything inside that rack. Think of it as the smallest AWS footprint you can bring: we roll it into your facility, connect it to network and power, and we manage everything inside. The servers, which are even smaller, extend that further. They could be in a data center, co-located in a rack with technology that you already have, or they could be outside of a data center, perhaps in a retail store, a back office, or an IT closet.

They just plug into a regular outlet and allow you to bring that compute to places where a 42U rack won't fit, or where you don't need 12 or 15 servers, you only need one or two or three.

Now, Outposts, much like Local Zones, allow you to bring AWS to you. As we built our hybrid services, we listened to so many customers, probably not unlike yourselves, who told us that the value of writing code and building applications using a consistent set of APIs, tools, and services was huge. That means there's no need for their developers to be versed, trained, certified, and fluent in two different sets of tooling, one for the cloud and one for on premises.

So with AWS, you're going to write code and build your applications using that same set of tools, and you can deploy them anywhere.

The other thing that customers shared was that they didn't want to manage the infrastructure; as we've been saying for years, that's undifferentiated heavy lifting. So we're going to manage that for them. Think about the shared responsibility model that we have in our AWS regions and Local Zones: that same model applies at the edge.

The only difference is that you as the customer, or your colo partner, or anyone else for that matter, provide some physical space, power, and a network connection. We'll manage the hardware, we'll replace it when necessary, and we manage it all using the same tools and systems that we use in the regions. We treat this as an extension of our regional fleet, and the other nice thing is that we give you a single pane of glass to manage all of this.

So once you order your Outpost and launch those resources, your EC2 instances, you're going to see them as part of the regional console. You can manage your fleet across its entirety, or you can narrow in, and you can use all the tools that we have, such as CloudWatch and CloudTrail and all the other monitoring and alarming services that we offer in region, for your edge deployments.
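As a rough illustration of that single pane of glass, here's a hedged boto3 sketch that enumerates your Outposts and checks their service link health via the ConnectedStatus metric that Outposts publishes to CloudWatch. The home region and the one hour lookback window are assumptions.

```python
from datetime import datetime, timedelta, timezone

import boto3

region = "us-east-1"  # assumed home region for these Outposts
outposts = boto3.client("outposts", region_name=region)
cloudwatch = boto3.client("cloudwatch", region_name=region)

now = datetime.now(timezone.utc)

# Enumerate Outposts anchored to this region, same fleet the console shows.
for op in outposts.list_outposts()["Outposts"]:
    # ConnectedStatus in the AWS/Outposts namespace averages near 1 when
    # the service link back to the region is healthy.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/Outposts",
        MetricName="ConnectedStatus",
        Dimensions=[{"Name": "OutpostId", "Value": op["OutpostId"]}],
        StartTime=now - timedelta(hours=1),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    # Datapoints come back unsorted, so order them before taking the latest.
    points = sorted(stats["Datapoints"], key=lambda p: p["Timestamp"])
    latest = points[-1]["Average"] if points else "no data"
    print(op["OutpostId"], op.get("Name"), "ConnectedStatus:", latest)
```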

Now, what this huge leap has allowed customers to do is simplify the complexity of managing their IT. It's allowed them to amplify their developer productivity. It's allowed, frankly, people on premises to move at the pace of the cloud, and to bring that cloud first approach and mental model of focusing your resources and investment at the application level, where you're going to differentiate your businesses and the services that you provide, rather than also needing to spend resources, time, and money managing the infrastructure those applications sit on top of.

To go a little bit deeper on the Outposts rack: this was the first service that we launched; we announced it back at re:Invent in 2019. It's an industry standard 42U rack. It's not small, but it fits in a standard position in most data centers.

We build these at our manufacturing sites and bring them to you fully assembled. We send an advance team out to make sure you have the right power port, that it can fit around the corners, and that the floor is rated for it; fully loaded, these things weigh about 700 pounds. So it's a nontrivial delivery, arriving in a wooden crate with tamper resistant sensors to make sure that everything has been safe and secure through the entirety of its journey.

We plug it in, and we offer redundancy across the board here: for everything other than the compute servers, which you choose yourself, there are at least two of each. We have redundant power, as power is one of the most common failures in compute racks; we have redundant networking switches; we have redundant networking connections; and we're constantly monitoring and managing everything inside this rack to make sure that it is online for you.

You can see right here a little picture of the 5 to 15 kVA power supplies that we provide. We recommend you connect these to two different sources; some facilities have a generator and one feed, and some have two independent feeds that don't share a single point of failure. That's going to increase the resiliency of your system to failures.

The same point goes for network: we recommend two different network connections with independent paths, so they don't just reconverge on the same piece of fiber 10 miles down the road, which is subject to a storm, a backhoe, or any of that host of things that can disrupt network connectivity.

And with the compute, you choose how much compute you put in this rack and what types of compute. Again, we recommend extra, and we can partner with you to strike the balance between how much compute you need and how much space and power you have, and tailor the resiliency of this Outpost to your specific needs and your budget.

When you think about how that looks logically as an extension of the region, it's very similar to the picture that Pranav showed for Local Zones. You just extend your VPC. For those of you that are deeper into the networking space: the Outpost resources show up in a private subnet available only to your AWS organization, within the region that you connect the Outpost home to, and they're managed by the regional control plane, much the same way Local Zones are built. And you can have a VPC that includes resources in multiple Availability Zones within that region, maybe within a Local Zone connected to that region, and within the Outpost.

Now, what the Outpost brings you is two new logical networking constructs. One is called the Local Gateway, and that's the connection to your local network, whatever is in your data center; that could be the internet, it could be other technology. And there's another logical network construct called the Service Link, and that's our connection back to the home region, the heartbeat, if you will, where we're pushing your AMIs across and sending metadata back and forth to monitor the health of the system. So you will see us using a little bit of traffic on your network, and that's just us monitoring the hardware and services.
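To show how the Local Gateway fits into VPC routing, here's a hedged boto3 sketch: it associates a VPC with the Outpost's Local Gateway route table, then routes an on premises prefix through the Local Gateway. The VPC ID, route table ID, and data center CIDR are placeholder assumptions, and it assumes a single Outpost for brevity.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed home region

# Find the Local Gateway route table delivered with the Outpost.
lgw_rtbs = ec2.describe_local_gateway_route_tables()["LocalGatewayRouteTables"]
lgw_rtb = lgw_rtbs[0]  # assumes one Outpost in this account/region

# Associate your VPC with the Local Gateway route table so the LGW
# can reach resources inside that VPC.
ec2.create_local_gateway_route_table_vpc_association(
    LocalGatewayRouteTableId=lgw_rtb["LocalGatewayRouteTableId"],
    VpcId="vpc-0123456789abcdef0",  # placeholder
)

# In the Outpost subnet's route table, point your on premises prefix at
# the Local Gateway; 192.168.0.0/16 is an assumed data center CIDR.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # placeholder
    DestinationCidrBlock="192.168.0.0/16",
    LocalGatewayId=lgw_rtb["LocalGatewayId"],
)
```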

Again, we mentioned that using the same tools is a huge part of the value proposition here. We've got about a dozen services that run locally on the Outpost, and of course the Outpost resources can use services back in the region. You need to be mindful of the reason you're using the Outpost: is it data residency, and does some of that data have to stay on the Outpost? Is it latency, and can there be workloads outside the critical path that could use something like Lambda back in the region?

We have our core compute services. We have Elastic Block Store. We have containers, both Amazon Elastic Container Service and Elastic Kubernetes Service. We have a local resolver from Route 53, local object storage from S3, load balancers, databases, and analytics with EMR. And we're constantly looking at what we can do to better serve customers and make that experience the same.

One of the important things that we look at here is partners as well, for two reasons. One, we have a huge swath of AWS Partner Network partners who are building solutions that are API compatible with AWS services, which satisfies our tenet of a customer being able to write an application and run it at the edge or in the region, even if the underlying service at the edge might be provided by a partner instead of by AWS.

The other thing we look at is that customers are also migrating workloads that run on what we call commodity hardware on premises, and they're using many of these partner solutions. So if the partner solution that's already built into their application, that they're familiar with, can come across to that edge device, that just accelerates the migration and takes some potential friction out of the way.

Now, Outposts servers are a little newer than racks, and we're super excited about this space in particular; we've only been at this with the servers for a couple of years. These fit into a standard 19 inch rack. We have a 2U server that's built on an Intel processor, and the small 1U server is actually built on a Graviton processor. This is the first time you can run Graviton workloads outside of our data centers, and what customers are starting to do with those is super exciting; stay tuned for more news on that. They don't require nearly as much power as the racks. They support a 10 gig uplink speed, and you can run one gig if you prefer. Rather than the Local Gateway that I talked about on the racks, we use a layer 2 network interface here, and we call it the Local Network Interface, or the LNI. So it's a slightly different network topology, and that's just because this is one server in a rack rather than multiple switches that we're managing, so we're able to present it in a slightly different fashion on your network.
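Enabling that LNI is a per subnet setting; here's a minimal sketch, assuming a placeholder subnet on the Outposts server and device index 1 (index 0 stays the instance's regular in-VPC interface).

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed home region

# Allow instances launched into this Outposts server subnet to get a
# layer 2 Local Network Interface at device index 1.
ec2.modify_subnet_attribute(
    SubnetId="subnet-0123456789abcdef0",  # placeholder Outpost subnet
    EnableLniAtDeviceIndex=1,
)
```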

The servers work very similarly to the racks: it's just an EC2 instance in a private subnet within the home region. You can have a VPC that spans multiple zones and includes your Outposts server and your Outposts rack; you can design and configure this however it works for you, and there's no wrong way to do it. The key part is that this works just like the region; it's just another EC2 instance. And that was a very deliberate design choice that we made as we built the service.

You're going to communicate directly with those resources from your local network across that LNI, the Local Network Interface, and we still have a Service Link connection, that logical heartbeat home to the region.

And you're able to spin up compute resources and access them locally, whether you're running control systems in a factory, elements of a smart retail store, or any other use case. We certainly have plenty of people, both AWS employees and customers, running these things in their basements and their home workshops, just experimenting, and I think that's a great signal of the momentum that we're building with the server business and how easy it is to get these things out there and bring EC2 to the edge. In terms of availability, we can deliver an Outpost, either a rack or a server, in 76 countries and territories today.

I know there are a few more than 76 countries and territories in the world, so more to come. The way we think about this is that there was what I'll call the easiest set of countries, where we've been doing business for some time. There was a mid band set where it's really just a matter of us having the correct registrations to do business, bank accounts, approvals, et cetera. And now there's a longer tail of countries where it sometimes takes a little time to get in front of the right folks, introduce them to us as a company, and obtain all the right certifications.

If there is a country where you need to use EC2 and would like to have an Outpost, and it's not on this list, let us know: reach out to your account team, or reach out directly to me or Pranav or any of us this week. We're happy to prioritize it; the rest of the world quite literally is just a prioritization exercise, and we're going to continue taking feedback from customers as we do that.

A couple of customer examples. Pranav mentioned this earlier, but Nasdaq are now running three domestic exchange markets on Outposts based in a data center in the New York metro area. This is a super exciting development; we had the CEO of Nasdaq, Adena Friedman, on stage back in 2021 announcing a multiyear partnership with AWS, and it's been a fantastic partnership.

In December 2022, they announced the completion of their first migration, missing re:Invent by just a couple of days, unfortunately. But that was a big win: within 12 months of us beginning the partnership, they were running one exchange. And just two weeks ago, Nasdaq announced that they had successfully migrated their third exchange, which is processing about 12 billion messages daily with extremely low latency and all the fairness and other regulatory conditions that are required to run an exchange in the US.

Another quite different use case, but just as interesting: Mercado Libre, the largest online commerce and payments provider in Latin America. The business has grown really quickly, and they now span 18 countries across Latam with more than 100 million active users.

Now, as they've grown at that pace, they really dug into their operational technology workloads with a forward looking approach: how can we build for the future, and how can we build for scale? Those are great things to think about when you're growing at that clip. They've partnered with us and deployed a number of these OT workloads on Outposts: things like automation systems in their facilities, Wi-Fi access point controls, physical access badge systems, and other systems that are used to protect their supplies and their people.

Many of these run on Outposts racks, and they're now starting to deploy Outposts servers in some smaller facilities for other workloads that might work just as well and get even more flexibility from the servers. Outposts are really giving Mercado Libre the ability to run their applications seamlessly and manage them with a single pane of glass across all of Latin America, and wherever else they extend beyond that, which is super exciting.

And tying it back to gaming: Riot Games was one of the first customers that we spoke publicly about as we built the Outposts business back in 2020. I had the good fortune to be personally involved with our work with Riot through the heart of the pandemic, and they've just been a phenomenal customer and partner.

Riot has selected AWS as their official cloud services provider. To give you a little background on Riot: their mission statement is that they aspire to be the most player focused game company in the world. They want to put players at the heart of every decision they make, and the relentless focus on the player experience is so clear.

Three of Riot's most popular titles are League of Legends, which is a multiplayer online battle arena game; Valorant, which launched in 2020 and is a five on five, very tactical first person shooter experience; and League of Legends: Wild Rift, a mobile version of League of Legends that was built from the ground up to optimize for the mobile player experience.

I want to talk a little bit about Valorant, because that was where we first partnered with Riot to deploy Outposts. Riot focused deeply on that character based first person shooter experience. They currently have more than 14 million active Valorant players worldwide, and a number of agents and game modes to change up the experience.

"Uh they were very clear on their tennis. They wanted to ensure the matches were fair uh five on five which is different from some other games like uh fortnite, which have 100 people in there.

Fairness mattered. They wanted to support a wide range of PCs, and to ensure that every player's experience was smooth, responsive, and rewarding. They really wanted to raise the bar on the defensive aspect of the game as opposed to the offensive aspect. So they zeroed in on something called peeker's advantage in Valorant.

This phenomenon works against their goal of giving the defender an advantage over an attacker. The folks at Riot knew they needed to solve this problem, so that the defensive player wouldn't be exposed to an attacker before they could react, and level the playing field in Valorant.

If you look at this animation as it runs through, there's a moment in time where the defensive player, who is looking over their shoulder, if you will, can see the attacker before the attacker exposes themselves; that gives them a chance to act first and not always be in a position of responding. To simplify a little bit, this animation illustrates the problem we need to break down: there's a fundamental unfairness in the reaction time here.

Now, one of the ways that Riot focused on resolving this issue was by targeting very low latency for the game player experience. As you can see in these visuals, as the latency reduces, the impact of peeker's advantage also reduces, because the messaging between the player's PC and the server happens faster and faster.

So Riot did a tremendous amount of research, and they determined that with a combination of a high frame rate, or what they call a high tick rate (their target was 128 Hz, which a client should be able to handle on most gaming machines), and a latency target of 35 milliseconds, their hypothesis was that they could reduce peeker's advantage to a level where it was minimally impactful for gameplay and create the tactical shooter experience they were looking for.
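To see why those two numbers matter together, here's a back of the envelope model (my simplification for illustration, not Riot's actual math) of how round trip latency and server tick interval combine into the window a peeker gets before the defender can react:

```python
# Simplified, illustrative model of peeker's advantage: the peeker's
# action travels to the server, waits for up to one server tick, then
# travels to the defender before the defender can even start reacting.
TICK_RATE_HZ = 128               # Riot's 128 Hz server tick target
TICK_MS = 1000 / TICK_RATE_HZ    # ~7.8 ms between server updates

def peekers_advantage_ms(peeker_rtt_ms: float, defender_rtt_ms: float) -> float:
    # Half of each round trip is a one-way trip; add one tick of buffering.
    return peeker_rtt_ms / 2 + TICK_MS + defender_rtt_ms / 2

# At Riot's 35 ms latency target the window stays small...
print(f"{peekers_advantage_ms(35, 35):.1f} ms")   # ~42.8 ms
# ...while a distant server roughly doubles it.
print(f"{peekers_advantage_ms(80, 80):.1f} ms")   # ~87.8 ms
```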

So they set two targets for themselves: that 70% of Valorant players could reach a server location that is 35 milliseconds or less away, and that 90% of players would be 50 milliseconds or less away. You can see where I'm going with this, given this is an edge conversation. The latency target was a lot more ambitious than the targets they had for League of Legends, where they were open to tolerating higher latency.

And the only way that Riot was going to hit that was by finding a way to bring their game servers closer to the players and optimize the routing as much as they possibly could.

I'm pleased to say that Riot was able to do both using our edge services. They knew that launching the Valorant game on AWS was the way they wanted to go, so they first looked across the AWS regions and determined: where could they run their game servers, and how did those latency maps cover the players they were serving?

This was a fairly obvious place to start, and of course, back in 2019, there were probably a couple fewer options than there are today.

The second choice after a region was, of course, a Local Zone, because it's so similar to a region; for Riot, for all intents and purposes, it works the same. And since all the game servers run the same software, and all the services that the Valorant game server is built on are present in the Local Zone, it was very easy to also extend to the Local Zones and further optimize that latency towards their desired target.

But the real thing that helped Riot hit their target was Outposts. Now, this was the third choice: they weren't going to deploy an Outpost where they could use a region or a Local Zone, which makes great sense, because with an Outpost they had to carry some additional overhead; they had to have a data center, and they had to manage the capacity.

But it did allow Riot to bring game servers to locations that had no AWS presence and serve gamers that couldn't possibly get within their desired latency target using regions and Local Zones alone.

And if you take a look at the map here, just focusing on the US: when they started building Valorant, Riot also had a bare metal data center in Chicago, and they were running in US East 1, US West 1, and US West 2. Now, if you think about where that is, the latency across the US alone means there's no way that's going to be fair, right? There are places all over the country that will be quite a distance from those data centers.

So they came in and put Outposts online in Texas and in Atlanta, and the reason was they were focused on gamers in Florida and Texas. If you look at the map right here, those gamers are certainly not within the 35 millisecond target, and many of them can't even satisfy the 50 millisecond target. But when you overlay the Outposts in Dallas and Atlanta, both of those pockets of gamers are now well inside the 35 millisecond target, even with some suboptimal network routing.

So Riot was able to achieve their goal by putting Outposts in those places. Now, of course, since then, Local Zones have come online in more and more metro areas, and Riot has been able to shift their workloads as appropriate and when appropriate.

Just to show a little additional color, here's some data from Riot's telemetry pipeline. You can see what North America looked like, and then in Europe, they also brought Outposts online in Madrid and Warsaw.

The latency improved, and the segment of players under 35 milliseconds increased by a meaningful clip, both in the US and in Europe.

Now, the next part: once we brought all this capacity online, Riot needed to make sure they could route the network traffic efficiently. Riot had been building their own network, which they call Riot Direct, since back in 2014, just based on the size of their operation and the popularity of League of Legends; they wanted to manage performance issues that had plagued players for years.

And their hypothesis was that if they removed inefficient routing via third parties wherever possible and directly peered with as many internet service providers as they could, that would lead to a better game experience. There are a number of paths that player packets can take to get from an ISP to a data center that's connected to just a single provider, and that path is also exposed to transient events like a DDoS attack or something similar, which might not be targeted at Riot's network, might be targeted at another customer, but still has impact.

And the goal here was to ensure that Riot players are on a direct path, or the closest thing they can achieve, between their PC and their game server.

So Riot Direct now spans the entire globe. It connects Riot players to Riot game servers, and also connects Riot studio locations, where the artists are building these games, to AWS regions, Local Zones, and Outposts, wherever they have them deployed.

So that's our deep dive on Riot and their edge use case. Back to Pranav to close it out. Yeah, so that was a great example of how Riot is leveraging the continuum of AWS offerings wherever they need it.

So before we conclude, I'll walk you through a few more topics that apply to both Local Zones and Outposts. The first one, which Rob touched upon, is a common question that we get from customers: how can I run partner solutions on Local Zones and Outposts? And the answer is, similar to regions, you can find, buy, and deploy software solutions for Local Zones and Outposts from the AWS Marketplace.

So you see here a slide that applies to the broader AWS Marketplace, and you see these eight popular categories of software which are available, including things like data products and professional services.

These include software from key ISVs such as Couchbase and more. And on the edge side, we continue to see more customers leveraging Marketplace solutions to build their low latency applications.

So now that you've understood both Outposts and Local Zones, of course the logical question is how to choose the right infrastructure for your applications. The answer again is very similar to how Riot approached it: it really depends on two factors, where you want to run your infrastructure and what your network latency needs are.

So if there is a region available close to where you want to run your infrastructure, of course the region is the right choice, given it provides the breadth of services. And when there is no region available close to your end users or your on premises facilities, but there is a Local Zone that can meet your latency needs, you should go ahead with the Local Zone.

And then finally, if you have ultra low latency needs that cannot be met by the regions or Local Zones available nearby, then Outposts is of course the right solution for you. Again, as I said, this aligns with how Riot approached it: they were able to build their application across the globe using regions, Local Zones, and Outposts.
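To summarize that decision flow, here's a toy sketch (my own condensation of the guidance above, not an official tool):

```python
def choose_deployment(needs_on_prem: bool,
                      region_meets_latency: bool,
                      local_zone_meets_latency: bool) -> str:
    """Toy summary of the region / Local Zone / Outposts guidance."""
    if needs_on_prem:
        return "Outposts"      # compute must live in your own facility
    if region_meets_latency:
        return "Region"        # broadest set of services
    if local_zone_meets_latency:
        return "Local Zone"    # single-digit ms in supported metros
    return "Outposts"          # ultra-low latency with no nearby zone

print(choose_deployment(False, False, True))   # -> "Local Zone"
```

With that, we are at the end of our session.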

Thank you all for coming to the session today. As always, we are listening to your feedback, and we will continue working on bringing more services and features to these offerings, as well as bringing them to more locations.

If you liked this session and want to learn more about delivering low latency applications at the edge, we recommend you check out both the chalk talk and the workshop, which will get into more of the nuances and details of how this works in practice.

Thanks again for joining us. We are going to use the next 10 minutes for any Q&A that you have. If you have any questions, I recommend you come closer to the mic, and I can repeat the question if it is not clear.

Again, don't forget to complete the session survey after the session.
