Run and manage Amazon EKS clusters on premises

All right. Well, thank you all for coming. You probably took a shuttle and had to ask 15 people how to get here, like I did. So I really appreciate you all coming to hear us talk today.

My name is Chandler. I'm the GM for all the EKS Hybrid and Edge products at AWS. And Lichun? My name is Lichun Li. I'm the product manager for EKS Anywhere. Nice to meet you all.

Great. I'm gonna kick it off. I'm gonna go over all the high level stuff about EKS Anywhere and how it fits into the overall Kubernetes strategy at AWS, walk you through the whole EKS portfolio, and then I'm gonna hand it off to Lichun in 20 or 30 minutes and she'll deep dive into EKS Anywhere. Ok, great, cool. Oh, I just did the agenda. So that was easy.

All right. So why did we first build EKS Anywhere? There's a bunch of customer use cases that we were targeting, and I'm going to go through all of those. The first one is low latency applications. You might have a gaming company that needs to serve its gamers and customers in an area where there isn't an AWS region yet, and they need low latency to the game servers, especially for competitive games. So you might want to be serving Egypt, for example, and we don't have a data center in Egypt yet. So you work with a partner and you rack some nodes in a data center in Turkey to serve your Egypt customers, and you still want to run Kubernetes in that data center. We get that request a lot, especially in gaming, but for other low latency applications as well.

The second one, which we see a lot, is actually just wanting to modernize your applications in your data center before you move into the cloud. There are a lot of customers we talked to who have purchased a bunch of servers and have them in their data center. They don't want those to just be sunk costs, and they're amortizing them over a certain number of years. They do know they want to move those applications to the cloud one day and they want to modernize them, but they don't want to lose the sunk cost they put into those servers. So they're like, hey, can you help us run Kubernetes on prem on this commodity hardware, Dell boxes, Lenovo, whatever, in our data centers, so we don't lose this sunk cost. So that's another reason.

The third one is data residency requirements. We see this a lot with financial institutions. We also see it with insurance, especially in Europe or other heavily regulated geographies. Customers have a reason to keep their data on prem: it might be a law, it might be an internal policy that hasn't changed yet. It could be a lot of reasons, but they want to keep customer data or their own data on prem in their own data center. So that's another reason.

The fourth is edge specific workloads. You see this with 5G, you see it with manufacturing, you see it with fast food joints. They have 1,000 stores and they want to run some kind of workload in every single store. It might be a small box, it might be a group of boxes in a broom closet, but they want to run Kubernetes and containers there, and they need help doing that.

Another one is cloud bursting and recovery. This is more aspirational for us; it's somewhere on our roadmap. This is where customers have a lot of capacity in a data center, but for Black Friday events, or when they launch a game and suddenly have a ton of people using it, they want to be able to burst some of that capacity to the cloud to fill that need. And then there's RAN, which I touched on quickly; we have a whole slide on that later.

The RAN space is really heating up, especially with 5G coming online. Those are all edge deployments, and by and large the telco industry has adopted Kubernetes and containers to help solve that problem, so they need help there. And lastly, you have workloads like, for example, a manufacturing customer who wants to run Spark models at their plant, and they want to run them on Kubernetes. How do we help a customer run Spark models at a plant on Kubernetes? That's a tough problem as well, right? So those are the main use cases we see, and there are others: there are cruise ships in the middle of the Atlantic, there are offshore oil platforms in the Gulf of Mexico, there are satellites, there are all sorts of places. We're seeing more and more that customers want to run Kubernetes really everywhere. Thus the name EKS Anywhere.

So what other problem were we really trying to solve here? We know that Kubernetes is a very complex platform, and there are a lot of table stakes features, some of which Lichun will walk through later, that we just need to have on that platform. But really the differentiator for EKS Anywhere is that it's going to be the most consistent experience with EKS in the cloud that's possible of any Kubernetes platform. And that's obviously a promise we can make because we own both those products.

So the CLI is going to be the same, and we have things on our roadmap to make the APIs as consistent as possible. When possible, we also have consistency between the actual operational workloads on both platforms. It's not always possible, but we do our absolute best to make these two environments as consistent as possible. Are they going to be 100% consistent? No, that's not possible. I mean, we have things like the VPC CNI for networking with EKS in the cloud, and we have Cilium on prem, because the VPC CNI doesn't exist on prem, right? But we can make them as consistent as possible. That's something we can do, and that's really our main goal with EKS Anywhere: to provide that consistency, and to provide it across the entire portfolio of EKS products.

So I'm gonna walk you all through this. Uh it took me two years to be able to explain it easily, but there's a lot of EKS products. Now at Amazon, you can run Kubernetes in a lot of ways.

On one side of the spectrum, you can run Kubernetes in a region, and you can do that in a few ways. You could run Kubernetes in a serverless way: just give us your container, we'll put it on EKS Fargate, and we'll handle all the scaling and node management for you. Or you can use EKS managed node groups; again, we help you with auto scaling, but you have a little more control over which nodes you pick and it's less of a black box.

And then you can just attach your nodes yourself to EKS and set up your own auto scaling groups. So you have three ways to run EKS in region, and a whole bunch of other tools to help you do that. But then when you start going out of region, where can you run EKS? You can run it in Local Zones, which you've all probably heard about, and that helps with some of the latency requirements I talked about earlier. And it's the same EKS in Local Zones as it is in the cloud. Exactly the same EKS, same APIs, same everything. You can also run it on Wavelength; again, the same EKS that you're running in the cloud.

And then you can run EKS on Outposts. Outposts being our rack that you can put in your data center. And again, it's the exact same EKS on Outposts that you're running in region. There are two modes you can run EKS on Outposts in: one mode where it always has connectivity to the cloud, and another mode where EKS can survive up to seven days of disconnect. You might want that for weather events, or some construction cuts a power line, or you have some DR event and you want to run it in that mode as well.

So those are all the ways you can run EKS where it's exactly the same as running it in region, the exact same deployment. But then when you start going into your own data center, that's where EKS Anywhere comes into play. And again, it's as consistent as possible with the region, but it's not exactly the same product. It's an open source product; you can check it out on GitHub, fully open source. And this is where EKS is actually meant to run on commodity hardware.

So when you can't run it on Outposts. And there's actually a little spot missing here, which is Snow devices, which come out soon; you'll be able to run EKS Anywhere on Snow devices as well. But when you can't run it on AWS hardware, that's where EKS Anywhere comes into play. That's the easiest way to explain it.

And then EKS Distro is actually how this all started. When we first started going into this hybrid world with Kubernetes a couple of years ago, the first project we built was EKS Distro, because we said, hey, it'd be nice if all these EKS products shared the same underlying binaries so we didn't have to do these builds 100 times. And we actually put a couple of cool patches on top of the open source, because EKS is 100% upstream open source, no fork, but we do have some patches. We run 10 to 15 patches: security issues, some operational changes, maybe some versioning. We might want to pull a performance feature from etcd down, so we bump the version of etcd, things like that. So we have 15 or 20 patches that we run at any given time across the 10 or 15 different builds that make up the core of Kubernetes. That's what EKS Distro is, also an open source project. You can download it, use those builds, and install Kubernetes with kOps or your favorite Kubernetes installer. It works with all of our partners, Rancher and so on, and obviously it's the same builds we're running under EKS Anywhere and all of the other EKS products. Ok? So that's what EKS Distro is.

So that's the full story. Those are all the Kubernetes products that we have today, and we think we've really checked the box for most places you're gonna want to run Kubernetes. If you think we missed one, after the talk I would love to hear where you want to run Kubernetes that we're not there yet. That would be a good thing to hear.

Ok, so I kind of touched on this earlier, but I want to make sure I hammer it home. There are really two ways to run Kubernetes out of that whole portfolio in your data center or at the edge.

And a third one coming soon, which is Snow devices. But the two main ones are Outposts and EKS Anywhere. Outposts, again, is our infrastructure. When you don't need to amortize infrastructure that you've already bought and you're looking to buy new infrastructure, Outposts is a really good option.

The TCO is actually pretty strong; it's a pretty strong case for Outposts there. And it's a really great option if you want to run Kubernetes in the most consistent way possible with the region, because it's exactly the same. Like I mentioned, it needs to be fully connected; we do support up to seven days of disconnect, but it does eventually need to call back home. And it's fully AWS managed, so you don't manage the infrastructure at all; we manage the control plane for you.

You just attach your worker nodes, the same model as the cloud. But if you want to use existing infrastructure, that's where EKS Anywhere comes into play. And it's important to know that you as the customer are actually managing and operating the cluster. It's completely free to use; you can just pull it down from GitHub right now and start using EKS Anywhere. If you want support, that's where we charge. And we also do some builds around what we call curated packages.

So like anything with Kubernetes, it's not ready to go out of the box, and you need things like monitoring and logging, or a container registry, or maybe an ingress controller. You as the customer would have to go out into the CNCF ecosystem and start figuring out, ok, what's the best one? Should I get Traefik or Kong or NGINX? We help make those decisions for you. It's optional; you don't have to use our curated packages, but we do provide a build for an ingress controller that you can run on EKS Anywhere.
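One reason the ingress choice is swappable at all: whichever controller you deploy, ours or your own, the application side stays a standard Kubernetes Ingress resource. A minimal sketch; the service name, host, and `ingressClassName` below are hypothetical placeholders, so use whatever class your chosen controller publishes:

```yaml
# Hypothetical application-side Ingress. Only the ingressClassName ties
# it to a specific controller; the rest is controller-agnostic.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx        # placeholder: match your installed controller
  rules:
  - host: app.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web            # placeholder Service in the same namespace
            port:
              number: 80
```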

So if you want those builds and you want support from us, that's where you pay us. That's how we charge for EKS Anywhere. Otherwise you can download it off GitHub, put it in production like a lot of customers do, and support and run it yourself, just like any other open source project; it's Apache licensed. So I encourage you to check out the GitHub. Also, if you do end up doing that, our engineers are really good about responding to issues. If you run into any problems, just open a GitHub issue and we can check it out.

Cool. So where are we seeing the most pickup? I mentioned some of these earlier, manufacturing being a really big one for us. Customers want to run Spark models, they want to run data analytics on prem, and they want to take advantage of everything Kubernetes can offer around portability of applications between their factories and plants. So we're seeing a lot of pickup with EKS Anywhere there.

Financial services is probably our biggest one as of right now, especially areas where there's a lot of regulation. We see that in Southeast Asia, we see it in Europe. The telco industry, as I mentioned, I'm gonna go into in the next slide.

Healthcare again, more because of these edge locations. There are a lot of applications healthcare runs in the hospitals themselves, and they want to be able to run Kubernetes there to do fast deployments and updates of those applications on the cluster.

And then the last one would be gaming, right? Those are really the big sectors we see right now, but we're not necessarily limited to them. If you have workloads that run anywhere and you want to run them on EKS Anywhere, it can help you do that.

So let's talk about 5G. This is one we've really invested heavily in. If you have workloads or you're helping the 5G industry, this is where EKS Anywhere can really help.

We have been working with some of the major network function ISVs who are helping the end customers install these new 5G networks. And it's a very complicated space. I don't know if any of you have ever even started to look into the telco space, but it's a whole new world, a whole new domain, and that's where we've really been diving in and trying to help customers get these network functions running on top of Kubernetes and get them certified with EKS Anywhere.

I can't name any customers yet, but we're starting to do deployments with some really big 5G folks, and we're really excited about working there.

I do want to mention our partners also. We have a whole bunch of partners, but three I want to call out. One being Weaveworks: we're calling out Weaveworks because we work closely with them, especially around GitOps. GitOps is our kind of de facto way of putting stuff onto Kubernetes clusters across our whole portfolio.

Obviously, you can use any kind of deployment methodology you want, but we really encourage people to move towards GitOps. So we bundle a project called Flux, Flux v2 actually, by default onto EKS Anywhere.

What that means is you can create an EKS Anywhere config file saying, ok, I want my cluster to look like this: I want it to run on top of vSphere, I want this many nodes, I want all these configurations. You can check that into a GitHub repository, something will be listening for that and say, oh, I got a new check-in here, that state will get updated on the Kubernetes cluster, and some controller will handle rolling out that new cluster, right?

So that's the dream we want, versus going into a CLI, typing a bunch of commands, finally getting your cluster up and running, only to have it go down for some reason two days later, and then not remembering all those commands you typed, right? That's what we're trying to avoid.

We're really trying to push people towards defining clusters, workloads, and really any interaction with Kubernetes through some kind of state configuration that can then be reacted upon, right? And that's where GitOps comes into play.
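A cluster definition of the kind described here might look roughly like the sketch below, based on the public EKS Anywhere spec around the time of this talk. The names, counts, and CIDRs are placeholders, and fields may have changed since, so treat it as illustrative rather than definitive; the `gitOpsRef` is the part that ties the cluster to Flux:

```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: dev-cluster                    # placeholder cluster name
spec:
  kubernetesVersion: "1.24"
  controlPlaneConfiguration:
    count: 3
  workerNodeGroupConfigurations:
  - name: md-0
    count: 3
  datacenterRef:
    kind: VSphereDatacenterConfig      # the vSphere option from the talk
    name: my-vsphere                   # placeholder
  clusterNetwork:
    cniConfig:
      cilium: {}                       # Cilium, the on-prem CNI mentioned earlier
    pods:
      cidrBlocks: ["192.168.0.0/16"]   # placeholder CIDRs
    services:
      cidrBlocks: ["10.96.0.0/12"]
  gitOpsRef:                           # wires the cluster to Flux
    kind: FluxConfig
    name: my-flux
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: FluxConfig
metadata:
  name: my-flux
spec:
  github:
    owner: my-gh-user                  # placeholder GitHub owner
    repository: cluster-config        # placeholder repo the controller watches
    personal: true
```

Once this is checked into the repository, the bundled Flux controller watches for new commits and reconciles the cluster toward the committed state, which is exactly the "no untracked CLI commands" workflow described above.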

We've worked with Weaveworks for years now. They've been a close partner of ours, not only for EKS Anywhere but also for EKS in the cloud.

The second one being Isovalent. Isovalent is an ISV that makes Cilium, and Cilium is our on prem CNI, as many of you probably know. Kubernetes doesn't really come with a default network overlay. Well, it does, but it's not production ready. So you have to pick one, and there are a bunch out there. The one we picked was Cilium, and we're really happy with that choice.

They've been a close partner of ours, and they give us a version of Cilium and actually help us support Cilium for you behind the scenes. So if you open a ticket with us and it happens to be a Cilium issue, our engineers will check it out, and if they can't figure it out, eventually we hand it over to the Isovalent engineers and they'll actually help us. That's all happening behind the scenes; you don't even know about it.

So that's a great partnership we've had with Isovalent. And the third one is Nutanix.

Lichun will go into this more later, but the underlying architecture of EKS Anywhere is actually Cluster API, and she'll talk about this in a second. There's a whole bunch of providers, as they're called, and essentially what that means is we can run EKS Anywhere on different platforms in your data center: you can run it straight on bare metal, you can run it on vSphere, you can run it on CloudStack, and now you can also run it on Nutanix.

That's coming out GA very shortly, and I think it's in private beta; Lichun will go into some of this. So those are the three big partners we launched with. I kind of want to give them a little prominence because I spend a lot of time on the phone with them all, but we're not limited to them.
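As a rough illustration of how those providers surface in practice: in the EKS Anywhere cluster spec, the kind of the `datacenterRef` is what selects the Cluster API provider underneath. The kinds below are as documented around the time of this talk, so verify against the current docs:

```yaml
# The datacenterRef in the Cluster spec selects the Cluster API provider.
datacenterRef:
  kind: VSphereDatacenterConfig       # vSphere
# kind: TinkerbellDatacenterConfig    # bare metal
# kind: CloudStackDatacenterConfig    # Apache CloudStack
# kind: NutanixDatacenterConfig       # Nutanix
  name: my-datacenter                 # placeholder name of the config object
```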

We have a whole bunch of people. We launched EKS Anywhere with all of these companies; they've all certified their applications on top of EKS Anywhere, and these are all great partners of ours. And there are more as well. We're probably forgetting some. Honestly, every month there's a new ISV from the CNCF that has certified their stuff on top of EKS Anywhere, and sometimes we don't even know it happened. They'll do a press release and we'll call them: oh, it works on EKS Anywhere. So that's great. And these are all those partners.

And I think there are more, yeah. So, a lot of partners on top of EKS Anywhere. That's the high level overview. I hope I gave you some understanding of where this fits into the EKS portfolio, which was my main goal here, and what use cases we're trying to solve with it. And now I'm gonna hand it off to Lichun, who's gonna actually tell you what the product does.

Thank you, Chandler. So we're gonna take a deeper look into EKS Anywhere. First, may I see a show of hands: how many of you have heard about EKS Anywhere and know what it is? Thank you, that makes me really happy. Well, for those of you who don't know it yet, hopefully this diagram will help.

It all starts with EKS Distro. EKS Distro is the Amazon distribution of Kubernetes. It is not a fork; it is the upstream Kubernetes. We just add more critical components, such as kube-proxy, CoreDNS, and a couple of other things, and we also make sure that we apply the security patches to the upstream Kubernetes and release it as EKS Distro.

That is actually the very first thing we upgrade: whenever we release a new Kubernetes version, we upgrade the Distro first, and then we go on to release the new Kubernetes version for EKS in the cloud as well as EKS Anywhere for on premises.

So actually all of the EKS products in the portfolio spectrum you saw earlier are based on EKS Distro. This is the first place you get the consistency, starting with the Kubernetes versions.

Now for EKS Anywhere. The Distro is the core; it's just a distribution of Kubernetes. EKS Anywhere, on the other hand, is the set of tools we build on top of EKS Distro. What are these tools?

We give you the tooling to bootstrap a cluster. We wrote a lot of logic to orchestrate cluster upgrades so that you don't have to figure out how to do those yourself. And we also bundle a list of additional software, which I will show you in detail later, to help you run a production Kubernetes cluster.

All of those tools packaged together are EKS Anywhere. You are already familiar with EKS in the cloud, which is a managed service: you don't have to worry about managing the Kubernetes control plane, and everything runs inside an AWS region or on an AWS Outposts server.

EKS Anywhere provides a very similar set of capabilities compared to EKS in the cloud. We give you capabilities for cluster lifecycle management, we give you auto scaling, we give you worker node group support, we give you operating systems.

The key differences are twofold. One: in the cloud, you leverage all those capabilities by calling RESTful web APIs. With EKS Anywhere, on the other hand, we designed the product to help you run a Kubernetes cluster either on your existing hardware, so that you can maximize the capital you already put in, or, for some other customers, even in a fully disconnected environment.

Therefore, all these equivalent capabilities are presented to you in the form of disconnected toolsets that do not need to depend on AWS APIs or services. That's how we enable, let's say, air-gapped cluster creation and subsequent management.

On top of that, we also have a list of software that basically serves as replacements for some of the AWS services that are not available on premises yet. This full set of disconnected capabilities is EKS Anywhere.

Another big difference is manageability. For an EKS cluster in the cloud, the Kubernetes control plane is managed by AWS; on premises, you as the customer do need to manage the control plane yourself. And by manage, I don't mean that you need to take care of everything big and small.

Again, the tools we give you have the smart logic that we learned from running EKS clusters in the cloud. We bring that same logic, just implemented in a disconnected manner, for example to help you upgrade the cluster successfully.

You do, however, need to plan when you want to do an upgrade, monitor the health of the cluster as the upgrade happens, and pause the upgrade if need be. So those are the key differences and similarities between EKS Anywhere and EKS in the cloud.

We launched EKS Anywhere with the VMware option about a year ago. Since then, we've had customers telling us they have applications that need access to the bare metal server underneath for performance reasons, so we launched EKS Anywhere for bare metal machines in June this year.

After that, other customers told us they still want to run the Kubernetes nodes in VMs, but they're looking for a virtualization technology that is free. So we enabled EKS Anywhere on Apache CloudStack in October.

And then there are two more deployment options that we're currently working on; both are actually in preview. One is to put EKS Anywhere on AWS Snow devices. Just like Outposts, that's another AWS supplied and supported hardware.

The difference between Snow and Outposts is that Outposts is meant for strongly connected use cases, whereas with Snow you can put a machine in a fully disconnected, air-gapped location. So EKS Anywhere will be able to run on Snow devices next year, and with this option you get end-to-end support, from the software stack to the hardware, directly from Amazon.

The last option is EKS Anywhere on Nutanix. We will be introducing this so that if you're already running Nutanix cloud infrastructure, you can also enable EKS Anywhere in that environment.

As Chandler previously mentioned, EKS Anywhere itself takes a dependency on the Cluster API project, which is a subproject of Kubernetes. The way we integrate with these various infrastructure options is by integrating with the corresponding Cluster API provider.

So for each one of these options, there's a corresponding what we call CAPI provider. In terms of operating systems, right now we have three, starting with Bottlerocket, which is an Amazon-developed and supported operating system that is purpose-built for running containers.

With that option, you again have a one-stop shop for support: you can call Amazon for everything from troubleshooting all the way to bug fixes in Bottlerocket.

We also support Red Hat Enterprise Linux as well as Ubuntu. With those two options, we have in-house testing to make sure that whatever version we publish in our documentation definitely works with EKS Anywhere. The key difference is we will provide troubleshooting guidance, but for official support to the level of, for example, CVE patches, we recommend you seek a support agreement with the corresponding vendor. That's a key difference between a first party operating system and the third party operating systems we support.

From the beginning, we designed EKS Anywhere to run highly available Kubernetes clusters in your own on premises data center on your own hardware, as a multi-node cluster deployment.

But since then, we've heard from quite a few customers, especially in the telecommunications space for radio access network use cases, that they need a small footprint cluster, as small as putting everything onto one single physical server.

That's why this month we're going to release support for a single node bare metal cluster deployment option: control plane, data plane, etcd, all of these components on a single physical machine.

One thing to note is this option is still meant for compute intensive workloads. It means you still need to be prepared to have about 32 vCPUs' worth of compute capacity for this single node machine.

In terms of deployment topology, so far we support two different topologies. One is a single, standalone EKS Anywhere cluster. You can create as many of these standalone clusters as you want, but every time you do a cluster lifecycle operation, meaning create, upgrade, or delete, you need to stand up a temporary admin machine where you install and run the EKS CLI; it's the same as the one you use in the cloud.

You use that command line tool to create, upgrade, or delete an EKS Anywhere cluster. But if you run, let's say, two or more EKS Anywhere clusters, then what we recommend is the second topology, which is to first deploy a management cluster, which is just another EKS Anywhere cluster.

It is long-lived, though; it's not like the temporary admin machine. On this long-lived management cluster, you can install your fleet management kind of systems, such as a monitoring system, or components you need to autoscale your workload clusters.

With this management cluster, you can then create subsequent child clusters, what we call workload clusters; that's where you actually host your applications. The most important thing about the management cluster is that it's the dependency that enables you to automate all of the cluster lifecycle management with your existing toolsets, such as Terraform, or, if you already use GitHub, with your choice of GitOps agent.
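A sketch of what that looks like in a workload cluster spec, assuming a management cluster named `mgmt`; the names and counts are placeholders, and the fields follow the public EKS Anywhere spec around the time of this talk:

```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: workload-1            # placeholder workload cluster name
spec:
  kubernetesVersion: "1.24"
  managementCluster:
    name: mgmt                # lifecycle operations for this cluster are
                              # reconciled by the long-lived management
                              # cluster, which is what lets Terraform or a
                              # GitOps agent drive create/upgrade/delete
  workerNodeGroupConfigurations:
  - name: md-0
    count: 3
```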

So those are the two topologies we support today. And lastly, this is the set of software we include. When running a Kubernetes cluster on prem, you need more than just the basic cluster lifecycle management toolchain.

That's why we have bundled quite a bit of this software, especially where there is no AWS service equivalent. For example, in the cloud, you would probably use ECR as your container registry, you'd use the VPC CNI, and you'd use ALB and NLB for load balancing. None of those services exist on prem, and they definitely don't exist in a fully disconnected environment.

So we have pre-bundled some software to help you with that. For example, right now we already have Harbor as a local registry for you, Emissary as the ingress controller, MetalLB for your application load balancing, and cert-manager to manage certificates on the cluster.
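To make the load balancing piece concrete: on bare metal, MetalLB typically needs to be told which address range it may hand out to LoadBalancer services. A minimal layer-2 sketch using the upstream metallb.io CRDs; the pool name and address range below are placeholders for your own network:

```yaml
# Placeholder address pool: MetalLB will assign LoadBalancer service IPs
# from this range on your local network.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.0.0.100-10.0.0.120      # placeholder: a free range on your LAN
---
# Announce those addresses via layer-2 (ARP/NDP).
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-adv
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
```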

We also bundle the AWS Distro for OpenTelemetry. You can instrument your application in the OpenTelemetry format and pipe the monitoring data to your own self-hosted Prometheus, or pipe it to the managed version of it, for cluster and application monitoring.
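The collector piece of that pipeline is configured in the standard OpenTelemetry Collector format. A minimal sketch that scrapes Prometheus-style metrics and remote-writes them to a Prometheus endpoint; the job name, discovery config, and endpoint URL are placeholders:

```yaml
# Minimal OpenTelemetry Collector pipeline: scrape pod metrics, then
# remote-write them to a Prometheus-compatible backend.
receivers:
  prometheus:
    config:
      scrape_configs:
      - job_name: app-pods               # placeholder scrape job
        kubernetes_sd_configs:
        - role: pod
exporters:
  prometheusremotewrite:
    endpoint: https://prometheus.internal.example/api/v1/write  # placeholder
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]
```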

And lastly, we do support cluster auto scaling. We give you the metrics server and the cluster autoscaler to help you do node level scaling, and you can use that in conjunction with HPA for pod scaling.
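Pod-level scaling with the HPA is standard Kubernetes, and it relies on the bundled metrics server for CPU and memory metrics. A minimal sketch; the Deployment name and thresholds are placeholders:

```yaml
# Scale the (hypothetical) "web" Deployment between 2 and 10 replicas,
# targeting 70% average CPU utilization. Requires metrics-server.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                 # placeholder Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

When pod counts grow past what the nodes can schedule, the cluster autoscaler handles the node-level side, so the two mechanisms complement each other.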

You will notice all of these are just open source software that you can grab and download yourself. So what's the big deal that we pre-bundle them? There are at least two pieces of value.

Number one: if you're not that opinionated and don't want to do all the research yourself, here is a set of software we've already curated. We do the version compatibility testing for you: every time there's a new version of EKS Anywhere, or a new version of any of this software, we test to make sure the combination of software we give you always works. So that's value number one: we take the testing burden away from you.

Number two: under the EKS Anywhere Enterprise license, you can call AWS for support for any of the software listed here, all the way up to bug fix level support. Even though these are not necessarily projects developed by Amazon, we will have our developers contribute the code back to the upstream OSS projects.

So that's the one-stop shop support kind of value you can get if you choose to use this software; it's released under the feature we call EKS Anywhere Curated Packages. On the rightmost side are the future areas where we're investigating whether we need to add additional software.

As you try out EKS Anywhere, if you have needs for particular new software you want us to bundle, feel free to talk to us after this session. So that's everything on EKS Anywhere.

Now we can take a look at EKS on Outposts. Again, Outposts is also Amazon supplied and supported hardware. The key difference is this hardware is meant for strongly connected use cases, whereas Snow is meant for disconnected use cases.

Outposts is just a rack that you can put into your own on premises data center, but you do need to make sure it has strong connectivity back to an AWS region. A lot of the AWS services that are available in regions are also available on Outposts; it's meant to mimic a mini AWS region that lives inside your data center.

EKS is available on Outposts. Previously, the way we implemented the architecture is we actually split the cluster in two: the control plane lives in the cloud, inside the AWS region, and the worker nodes live on the Outposts hardware in your own data center.

The problem with that is if you ever experience a temporary network disconnect of more than five minutes. The Kubernetes architecture itself has a limitation: if the control plane doesn't see a heartbeat from a worker node for more than five minutes, it starts to think the worker node is unhealthy, and it marks it as such in its own state.

However, your worker nodes, your application, could be happily running on prem without any problem. But now when the network connection resumes, because the control plane already marked the worker nodes as unhealthy, it will start to recycle them, causing application downtime. So that has been a big problem for our customers trying to use EKS on Outposts.

Therefore, in September, we launched a feature we're calling EKS local clusters on Outposts. When you have an Outposts server and you go to the EKS console to create a cluster on it, you will now see these two different options.

The local cluster is designed to mitigate the network disconnect problem. We basically now have an architecture that caches the state of your control plane locally on the Outpost, so that if you have a disconnect of up to seven days, all of the information is still there locally on the Outpost.

And once the network resumes, we sync all the info back to the actual control plane in the cloud. That's how we're able to support continuous availability for your application for up to seven days on Outposts.

Now, with that, I think we are concluding our session today. Feel free to talk to Chandler or me after the session; we'll be around here. Thank you very much. Thank you.
