Migrating to AWS Graviton with AWS container services

Good afternoon, everyone.

First of all, how's your re:Invent been so far? Good. All right.

How many of you are here today because you want to save money running containers on AWS?

And how many of you are here today because you want to run your containers faster on AWS?

And how many of you are here today because you want to run your containers more sustainably on AWS? You have sustainability goals.

And how many of you are here today because you want to hear about how other customers like Samsung have succeeded in migrating their containers to Graviton?

Well, that's really great to hear.

My name is Michael Fisher. I'm here with my colleague J Young Kim, and Hyun Kim from Samsung Electronics, to talk to you today about how to migrate to AWS Graviton with AWS container services. This is session CMP404.

If you're here for these reasons, you're in the right place. Welcome.

All right. So let's start with a brief overview. Why Graviton?

Well, our customers are migrating to AWS Graviton for a number of reasons. But to start, let's talk a little bit about what AWS Graviton is.

AWS Graviton is custom AWS silicon that uses 64-bit Arm processor cores. In building AWS Graviton, we look at a broad selection of the kinds of workloads that customers run on AWS, and we tailored Graviton to meet the broad spectrum of workloads that our customers run for real on AWS every single day, whether they be web services, databases, caches, API services, or business logic processors. All of those are considered and targeted when we build Graviton for our customers.

And all of this is because Graviton gives us the ability to rapidly innovate, iterate, and build for our customers. When we build Graviton, we're doing that with in-house AWS resources, talent, and expertise. We have the ability to iterate very quickly, instrument, and then produce Graviton processors that are built for a broad spectrum of customer workloads.

So as a result of this, we get three major benefits for Graviton.

Number one, Graviton has the best price performance in Amazon EC2 for a broad array of workloads such as the ones I just talked about.

In addition, Graviton costs up to 20% less per instance hour than comparable EC2 instances with the same number of vCPUs, the same amount of memory, and the same enhanced features, such as storage-optimized and network-optimized instances.

And just to add icing on the cake, Graviton uses up to 60% less energy than comparable EC2 instances. This helps not only Amazon meet its sustainability goals, but it also helps you succeed in your sustainability goals as well.

Now, containers and Graviton go hand in hand; they're a great fit for a number of different reasons.

Number one, Graviton is supported by all major container orchestrators and runtimes. These are architecture neutral container runtimes. Once you attach nodes to a cluster, those nodes register their CPU architecture to the cluster and the orchestrator can then make intelligent decisions about where to place containers based on your preferences.

The image tooling is extremely mature and supports not only the ARM64 CPU architecture, but the ability to build multi architecture images as well.

The container runtimes, including Docker Engine and containerd, all support multi-architecture containers and automatically pull down the right image for the architecture that your containers are intended to run on.

Migrating to Graviton is not an all-or-nothing proposition. Your journey to Graviton can start by identifying a workload that would be a good fit for Graviton, introducing it to a lower environment such as dev or test, and then beginning to do things like integration testing, unit testing, performance testing, and so forth against it. And when you've identified those workloads that are good candidates for Graviton, you can then bring those to production.

And once you've brought your first Graviton workload to production, you can then begin looking at other potential workloads to migrate to Graviton and move those too, all while still keeping x86 instances for those workloads that still need them.

In addition, all major observability agents and providers have support for Graviton. You can instrument the same kinds of gauges, counters, and so on that indicate the performance of your workload and how many resources it's consuming, with no loss in fidelity at all.

So let's get started talking about actually using containers on AWS Graviton.

When we talk about containers on AWS Graviton, one of the things I like to tell customers is that they can run containers their way, all the way from having full control of every node that's added to the cluster, to having a fully managed experience using AWS-managed compute such as AWS Fargate.

If you want to select the specific instances that you use to run your workload, you can do that. So whether you want to run burstable instances such as T4g instances, or you want to use specialized instances such as storage-optimized or network-optimized instances, or even instances with NVIDIA GPUs, such as our G5g instances, you can use any of those and mix and match them with other Graviton-based instances or x86 instances as you see fit.

On the other hand, if you don't want to manage your own compute, if you don't want to do things like manage AMIs, manage patches, or handle things like vacating tasks or pods away from hardware that's about to go under maintenance, you can use AWS Fargate instead and run your tasks there using Amazon ECS.

If you choose to use Fargate, the responsibility shifts, and AWS takes on more of that responsibility. So for example, everything up to and including the container runtime is managed by AWS. All you have to worry about as a customer is the integrity and performance of your container image itself.

Finally, AWS Lambda supports container images and can run on Graviton as well. All you have to do is choose to run your Lambda function on the ARM64 architecture, choose the amount of memory to allocate to your function, and then just run it. And you pay only for function invocation and runtime.
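As a hedged sketch of what that looks like with the AWS CLI, the function name, image URI, and role ARN below are placeholders:

```
# Hypothetical example: create a Lambda function from a container image on Graviton (ARM64)
aws lambda create-function \
  --function-name my-arm-function \
  --package-type Image \
  --code ImageUri=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest \
  --role arn:aws:iam::123456789012:role/my-lambda-role \
  --architectures arm64 \
  --memory-size 512
```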

And of course with Lambda, just as with Fargate and EC2, running your functions as containers costs less on Graviton than it does on x86.

We have a broad variety of Graviton instances for every workload, whether they be general purpose, compute optimized, memory optimized, storage optimized, or accelerated with GPUs. We have all of those available.

We went from the original A1 instances, which we don't talk about too much, to our fourth generation of Graviton today. How many of you heard about Graviton4 this week? We're super excited about Graviton4. We couldn't get it onto the table in time because it hadn't been announced yet, but we're very excited about Graviton4 being up to 30% faster even than Graviton3.

Now, running containers on Graviton really is as easy as 1-2-3.

Step one: build your container image. That can either be Graviton-specific or it can be a multi-architecture image.

Step two: provision the compute. Figure out what compute you want to run your containers on and attach it to your cluster, via, for example, an ECS capacity provider, or with Cluster Autoscaler or a managed node group with EKS. Then deploy your application as usual. Other than potentially the image, and maybe not even that, that's really all you need to do.

And in fact, it's even easier today. You can start with your container image, deploy your application, and you no longer need to provision compute in advance; the orchestrator can bring it to you. You can run on serverless, fully managed compute like Fargate, or you can take advantage of the new functionality in ECS such as EC2 capacity providers. Or if you're running EKS, you can use Cluster Autoscaler, or even better, Karpenter, to arrange for the capacity to be delivered and brought online just in time.

So the orchestrators are able to look at pending pods or tasks and then automatically start and attach the right compute to your cluster.

So let's start with part one, which is building container images. Building container images natively on Graviton is exactly identical to building container images natively on x86. You can build your container image using Graviton-based EC2 instances. You can use AWS CodeBuild, which offers ARM64-based, Graviton-based environments as runners. Or if you have a Mac with Apple silicon, you can build your container image right on that Mac, push it to your registry, and then use it on Graviton just as you would an x86-based image.

Alternatively, if you don't have access to either a Graviton builder or an ARM64-based laptop or desktop environment, you can perform an emulated build instead. With an emulated build, you build your image on an x86 host, but you produce an image that is built for the ARM64 compute architecture. This uses BuildKit, exposed through Docker as docker buildx. And to work its magic, buildx uses a software-based CPU emulator called QEMU.

First, you create a builder by executing docker buildx create, and then you run docker buildx build. The first argument to docker buildx build is the --platform argument, and here you pass the operating system, architecture, and architecture variant; in this case it's linux/arm64/v8. You tag the image with the repository and tag that you want, and then you add --push to push it to your registry.
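For example, the emulated build just described looks roughly like this; the builder, repository, and tag names are placeholders:

```
# Create a BuildKit builder instance and make it the current one
docker buildx create --name graviton-builder --use

# Build an ARM64 image on an x86 host via QEMU emulation and push it to the registry
docker buildx build \
  --platform linux/arm64/v8 \
  -t my-registry/my-app:arm64 \
  --push .
```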

Now let's talk about building images for multi-architecture support. This gives you the most flexibility in terms of deployment. Building a multi-architecture image allows you to deploy your containers on either x86 or Graviton-based instances, however you see fit.

Now, I actually asked Docker about public Docker Hub images. Did you know that over 60% of the top 1000 images on Docker Hub already support multiple architectures? We have very similar data for Amazon ECR Public as well.

So when you're declaring a base image in your Dockerfile, for example FROM nginx:stable, you're already pulling a multi-architecture-capable image, and that's true for pretty much any popular base OS image there is. So by building a multi-architecture image yourself, you're doing exactly what these publishers are doing.

The overall procedure is fairly straightforward. In the build stage, you build natively in parallel: the parallel actions are to build for x86 and to build for ARM64, and then push each of those images to your container registry. Once those architecture-specific images have been built, you produce a multi-architecture manifest, or index, and push that to the registry as well. It's the index that you actually refer to in your task definition or your pod spec, but the container runtime will automatically pull the correct bits that correspond to the CPU architecture.

You can actually do this in one step with docker buildx, or BuildKit. As before, you create a builder instance using docker buildx create, and then you run docker buildx build and pass it not just one platform but a list of platforms. In this example, we're using both the linux/arm64/v8 platform and the linux/amd64 platform. Tag it as you would any other image with the registry and tag, and push it. What will happen is that BuildKit will build these two architecture-specific images in parallel on your local build workstation or your build environment. It will use a native build for whatever the native CPU architecture of your build instance is, it will use software emulation for the other one, and it will push the result in one step.
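A minimal sketch of that one-step multi-architecture build; the image name is a placeholder:

```
docker buildx create --name multiarch-builder --use

# Build both architecture-specific images and push a single multi-architecture image
docker buildx build \
  --platform linux/arm64/v8,linux/amd64 \
  -t my-registry/my-app:latest \
  --push .
```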

Now, I don't necessarily recommend doing this for production CI/CD workloads, because software emulation is much, much slower than building images natively, up to 20 times slower. So if you're going to build an image using emulation, you probably only want to do it if all you're doing in your Dockerfile is copying files from your local build environment into the container being built. If you're doing things like building Chromium from scratch, you probably don't want to do that with emulation, because it could take days. Instead, you really want to do any container builds that involve lots of compilation natively rather than in an emulated way; that is, build multi-architecture images natively.

These are the commands you would run on both the ARM64 and x86 instances. You run docker build, give it the tag, and then push to the registry, and you do that in parallel.

Now, you'll note here that the tags I'm supplying in these examples include an architecture suffix.

"And that's because the index builder in the third step, which is docker manifest create, needs names of architecture specific image tags in order to build the index or multi architecture manifest that stitches those together.

So in the third step, you're running docker manifest create. The first argument you're giving is the architecture-neutral registry and tag, and then every argument after that is an architecture-specific tag.

So you're seeing here docker manifest create myapp:test, and the two images being added to it are the test-arm64 and test-x86 tags. And then after that, you run docker manifest push.
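Putting those three steps together, here is a sketch of the native multi-architecture flow; the image names are placeholders, and the first two pairs of commands run on the ARM64 and x86 builders respectively:

```
# On the Graviton (arm64) builder
docker build -t my-registry/my-app:test-arm64 .
docker push my-registry/my-app:test-arm64

# On the x86 builder
docker build -t my-registry/my-app:test-x86 .
docker push my-registry/my-app:test-x86

# Stitch the architecture-specific tags into one index and push it
docker manifest create my-registry/my-app:test \
  my-registry/my-app:test-arm64 \
  my-registry/my-app:test-x86
docker manifest push my-registry/my-app:test
```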

What that looks like from a diagram perspective is basically a hierarchy, or a tree, where the multi-architecture manifest or image on the left points to the manifests of the images for the ARM64 and x86 CPU architectures, which in turn point to the image layer blobs that are actually stored in the registry itself. All major container registries support this; it's part of the Open Container Initiative image specification.

So ECR, ECR Public, Docker Hub, Artifactory, all of these support multi-architecture images. This here is a general overview of the build process I described earlier in your CI/CD pipeline. We have multiple stages, three in this particular example, with the first stage triggered by a push to your code repository.

One action of this stage is to build on x86 using an x86 native runner. The other is to build for the ARM64 architecture using a Graviton native runner. Then, in parallel, those images are tested, and a multi-architecture manifest is built in the succeeding stage. And finally, in the deploy stage, a deployment to a lower environment is invoked.

Now, of course, you don't want to stop there. You want to then test and deploy, potentially using gated stages, to successively higher environments until you're satisfied that the multi-architecture image is ready to be deployed to production, in which case you can add additional stages to perform those deployments as well.

There's broad DevOps ecosystem support both for managing the CI/CD pipeline and for the actual runners. If you want a fully managed CI/CD experience, you can use AWS CodePipeline with AWS CodeBuild, or you can use CircleCI or Travis CI; all of these supply native runners for the ARM64 architecture. If you prefer a hybrid approach, where the actual execution engine is hosted but the runners are self-managed, you can use GitHub, GitLab, or Buildkite. And finally, if you want a fully self-managed experience, you can use Jenkins and similar solutions as well.

Now with that, I will turn to my colleague J Young Kim to talk about how to use Amazon ECS and EKS to run your Graviton containers. Thanks, JY.

Yeah, awesome. Thanks, Michael, for the insightful presentation on the advantages of Graviton and the process of building Graviton container images. So once all the container images and multi-architecture manifests have been built, it's time to deploy these container images onto your container orchestrator. All major runtimes and container orchestration tools work seamlessly with AWS Graviton, and this includes ECS, EKS, and Docker Swarm as well.

So let's talk about using Graviton with Amazon ECS first. With ECS, the control plane is architecture independent, and therefore both x86 and Graviton computing resources can join a cluster. If you are running your containers on Fargate, which is a serverless container compute engine, then using Graviton becomes even easier: you can do it by configuring runtimePlatform's cpuArchitecture to ARM64 in your ECS task definition.

At AWS, we highly recommend using Fargate, because Fargate relieves operational overhead by shifting undifferentiated operational tasks to AWS, and it makes it easy to use Graviton with less additional configuration. On EC2, you can schedule your ECS tasks on Graviton instances via placement constraints and capacity provider strategies. A task placement constraint is a rule for selecting the EC2 instances that are capable of running a specific ECS task, and a capacity provider provides the computing resources.

So, for example, Graviton instances can be configured in the capacity provider, and an ECS task that is intended to route to Graviton can sit on those EC2 instances, or that Auto Scaling group. If you are running your ECS tasks on Fargate, all you have to do is set requiresCompatibilities to FARGATE and runtimePlatform's cpuArchitecture to ARM64. There is no need for task placement constraints or capacity providers.
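As a minimal sketch, the relevant portion of such a Fargate task definition might look like this; the family, image, sizes, and the omitted execution role are placeholders:

```
{
  "family": "my-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "runtimePlatform": {
    "operatingSystemFamily": "LINUX",
    "cpuArchitecture": "ARM64"
  },
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
      "portMappings": [{ "containerPort": 80 }]
    }
  ]
}
```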

So the ease of using Fargate with Graviton lies in this simplicity: builders can concentrate only on building container images compatible with the architecture.

So let's take a look at an ECS task definition for EC2 with a placement constraint. Since containerized applications are most likely architecture dependent, we made a configuration that explicitly brings ECS tasks to the Graviton instances. Here we set a placement constraint of type memberOf, which places the ECS task on instances that satisfy this expression.

With this, your ECS tasks will be provisioned on top of Graviton instances. To sum it up: by preparing your Graviton instances with an ECS capacity provider and deploying your ECS task with this placement constraint, your ECS tasks will be provisioned on Graviton instances.
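A sketch of that placement constraint in a task definition for the EC2 launch type, with other fields omitted:

```
{
  "requiresCompatibilities": ["EC2"],
  "placementConstraints": [
    {
      "type": "memberOf",
      "expression": "attribute:ecs.cpu-architecture == arm64"
    }
  ]
}
```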

So I'm going to show you the demo. Before we jump into migrating to Graviton on Fargate, let's briefly outline the current state. As you can see, there is an ECS service named nodejs-multi-arch, and we're going to click on it to take a look at the details. This ECS service is sitting behind a load balancer.

So I will connect via the load balancer DNS endpoint. Click the DNS name, and it shows the Node.js web application. Click the info tab, and you can see this ECS service's runtime information in detail. As you can see, this ECS service's architecture is x86, utilizing an Intel Xeon processor.

And as I said before, to migrate this ECS service to Graviton on Fargate, we need to change the ECS task definition's configuration. Currently, this ECS service uses the first revision of the task definition, named nodejs-multi-arch-task. To make the necessary change, navigate to the task definition console, select the relevant task definition, and click the create new revision button, leaving other settings as they are.

But we are going to change the ECS task's CPU architecture from x86 to ARM64. In this demo, I have built the container images with a multi-arch manifest, so this manifest acts as a router that directs the container orchestration tool to the appropriate container image, for either the ARM64 or AMD64 architecture, during the deployment stage.

So there is no need to change the container image URI, because it refers to the manifest URI. Having finished all the configuration, let's create the new ECS task definition revision, head back to the ECS cluster, select the ECS service, and proceed with the update. We are going to change the revision number from 1 to 3, which I just made, and then carry out the update. We can check the ECS task deployment status in the tasks tab.

You can also see the relevant events in the deployments tab as well. As you can see here, through a rolling deployment, our new ECS tasks will be sitting on Graviton Fargate. After the deployment is completed, we go back to the same endpoint and refresh the web page.

As you can see here, this new ECS task's architecture is ARM64, which means this web application is now running on Graviton with Fargate.

All right, then let's talk about using Graviton with EKS. With EKS, the control plane is also architecture independent, so both x86 and Graviton nodes can join a cluster. If you want to route a specific pod to a Graviton instance, you can easily do that via a node selector or node affinity, which I'll show you on a later slide.

And Karpenter, which is one of the AWS open source projects, provides an efficient and scalable way to manage Kubernetes cluster autoscaling and can provision nodes just in time. Karpenter is architecture aware and immediately provisions Graviton nodes that closely fit the pending pods if additional nodes are necessary in your EKS cluster.

Unlike Cluster Autoscaler, Karpenter takes advantage of the EC2 Fleet technology in order to launch instances quickly, in parallel, and often in just thirty seconds or less. And of course, Cluster Autoscaler is also capable of scaling Graviton node groups.

So if all node groups in your EKS cluster are Graviton, then you don't have to specify a node selector or node affinity. But if you build your container workload as multi-arch for flexibility, then you may need one of them in the pod spec YAML file or deployment spec template.

As you can see here, a node selector is quite simple but a bit rigid and less expressive compared with node affinity, because it only supports equality-based label queries. With node affinity, which is an expansion of node selector, you can route a pod to a Graviton instance with more advanced queries and flexibility, as sketched below.
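For reference, here is a minimal sketch of both approaches in a pod spec, using the standard kubernetes.io/arch node label:

```
# Option 1: nodeSelector, equality-based
spec:
  nodeSelector:
    kubernetes.io/arch: arm64

# Option 2: nodeAffinity, more expressive
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - arm64
```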

And this node affinity rule means that the pod will be scheduled on an EC2 instance whose architecture is arm64. Having configured your pods to be scheduled on Graviton instances, it is crucial to provision your Graviton nodes with a capacity provider such as Karpenter or Cluster Autoscaler. Karpenter simplifies the autoscaling process by abstracting away the underlying complexity, and it provides a straightforward way for users to define and update their autoscaling policies using NodePools.

Just for your information, this was called a Provisioner until recently. When you look into the NodePool YAML file on the screen, you will see the requirements that constrain the nodes to be provisioned, and these requirements are combined with the node affinity rules seen on the previous slide.

As you can see here, we are going to provision both arm64 and amd64 container instances with this requirement, for flexibility. But if you only want to provision Graviton instances in this EKS cluster, you can specify only arm64 rather than both. Then, taking a look at the second requirement:

As you can see, it requires the instance generation to be greater than four. With this, we can expect Karpenter to provision instances whose generation is greater than four. And last but not least, Karpenter always figures out the lowest-priced nodes that closely fit the pods' requirements.
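A minimal sketch of such a NodePool, assuming Karpenter's v1beta1 API and an EC2NodeClass named default (both assumptions, not taken from the demo):

```
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        # Allow both architectures for flexibility; keep only arm64 for a Graviton-only cluster
        - key: kubernetes.io/arch
          operator: In
          values: ["arm64", "amd64"]
        # Only provision instances whose generation is greater than four
        - key: karpenter.k8s.aws/instance-generation
          operator: Gt
          values: ["4"]
      nodeClassRef:
        name: default
```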

So by using Karpenter, you can achieve both operational excellence and compute cost optimization. Now let's take a look at the EKS Graviton demo with Karpenter.

Here, I have prepared two YAML files: one is for the Karpenter NodePool, and the other is the deployment YAML file that we're going to use for deploying the containerized application. Additionally, in the terminal window below, I've got eks-node-viewer running.

This tool provides visibility into dynamic node usage in your EKS cluster. Click the NodePool YAML file and scroll down a little bit, and you can see the requirements section. Here we can observe that both the arm64 and amd64 architectures are included.

With this, Karpenter will examine the node affinity or node selector of each new pod to see whether there is a specific CPU architecture requirement. And then you can see the second requirement, which requires the instance generation to be greater than four.

Therefore, Karpenter will provision a Graviton instance whose instance generation is greater than four if one is needed to deploy and fit the pending pods. Turning to the deployment YAML file for the containerized application, it says explicitly that these pods will be placed on arm64 container instances, and note that this container image supports both x86 and Graviton.

So we start by creating the Karpenter NodePool, and then we apply the deployment YAML file, which has two pods in it. We also use kubectl get pods to check the two pods' current status. As you can see, the two pods' status is Pending, and you can observe the same thing in eks-node-viewer below.

And we use kubectl with the watch flag to observe the pods' creation progress in real time. After a brief wait, it provisions four pods onto the instance, which include aws-node and kube-proxy, plus the two pods that were defined in the sample app YAML file.
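What I'm running in the demo amounts to roughly these commands; the YAML file names are placeholders:

```
kubectl apply -f nodepool.yaml        # create the Karpenter NodePool
kubectl apply -f deployment.yaml      # deploy the sample application (two pods)
kubectl get pods                      # pods show Pending until capacity arrives
kubectl get pods --watch              # watch Karpenter bring a Graviton node online
```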

As you can see here, the Graviton instance was provisioned with a generation greater than four. So I'd like to give you an at-a-glance summary of what I have covered.

To harness container workloads on Graviton, first you have to specify the relevant details in the container deployment unit configuration. In ECS, this will be the ECS task definition, and in EKS, this will be the pod YAML file or deployment YAML file. Then, specify Graviton instances in the capacity provider.

Of course, when you are using ECS Fargate, you don't have to specify a capacity provider configuration. But in terms of ECS on EC2, you have to prepare Graviton instances in an ECS capacity provider, and in EKS, this will be Karpenter or Cluster Autoscaler.

So through these steps, your containerized application will be sitting on Graviton computing resources. With that, I'd like to hand over the stage to Hyun Kim to talk about Samsung Electronics.

Samsung TV Plus: our Graviton journey.

Hello, everyone. I'm Hyun, and I work in the Samsung Electronics service business team as a DevOps engineer. I'm very excited to stand here and meet everyone. Today, I'd like to talk about how we use Graviton on AWS EKS, with my story from Samsung TV Plus. Samsung's service business team provides various kinds of services, such as Galaxy Store, Gaming Hub, Samsung TV Plus, and so on.

Recently, we launched a mobile cloud platform in Gaming Hub. Samsung Gaming Hub provides an instant play experience that enables game publishers to direct users instantly into their mobile games, bypassing the app store install process. I'd like to talk about Samsung TV Plus, because my journey began with this service. Every Samsung mobile phone user can watch free TV through Samsung TV Plus. Samsung TV Plus provides hundreds of channels, such as CBS News, Paramount Movies, Golf Pass, and a lot of kids' channels, and you can also get personalized recommendations. It is available in 24 countries on TV devices and in 11 countries on mobile devices, and the mobile server handles thousands of TPS. Before we migrated to Graviton, we used hundreds of C5 instances, our application used Spring on Java 8, we ran EKS version 1.22, and we used GitHub Actions for CI/CD.

Let me tell you why we adopted AWS Graviton. First, cost optimization: we can definitely reduce the EC2 cost just by changing the instances to Graviton. Second is price performance: Graviton's performance is great, and I will talk about this later. Third, we can reduce carbon emissions, because Graviton is a low-power server, so it reduces carbon emissions and we can participate in ESG easily. And we can keep pace with technology: I think using the latest generation of instances is very important, because it powers our servers and our service.

This is the migration plan. When it comes to AWS managed services such as EMR, RDS, OpenSearch, ElastiCache, and so on, it is very easy to migrate to Graviton, because you only need a few clicks in the AWS console. However, in the case of EC2, we need evaluation, planning, and testing to migrate to the Arm CPU, because Graviton uses an Arm CPU. My journey to Graviton had these three stages. In the validation stage, we check whether the software installed on EC2 is compatible with the Arm CPU or not. Then we build a multi-platform Docker image. Finally, we create the Graviton instances and deploy our application on them. I will explain more details on the next pages.

First, we have to check whether our software is compatible with the Arm architecture or not. No change is required for software installed with native package managers such as yum and apt, and no change is required for our Java application code. However, sometimes we have to take care when installing software onto EC2. For instance, with the AWS CLI version 2, there are two kinds of packages, one for x86 and the other for ARM64, so you have to choose one according to your CPU architecture.

Second, with docker buildx and QEMU, we can easily build multi-platform Docker images, which means we can get Docker images for x86 and ARM64 at once. This code is a sample of our GitHub Actions build script; many more examples are available on the internet, and they are very helpful.
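As a rough sketch of such a workflow, assuming Docker's official setup and build-push actions; the registry login step is omitted, the image name is a placeholder, and this is not Samsung's actual script:

```
name: build-multi-arch
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-qemu-action@v3      # register QEMU emulators
      - uses: docker/setup-buildx-action@v3    # create a BuildKit builder
      - uses: docker/build-push-action@v5
        with:
          platforms: linux/amd64,linux/arm64
          tags: my-registry/my-app:latest
          push: true
```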

So once the Docker images are prepared, we can create the Graviton instances. Our ALB routed traffic to the x86 instances, which were the legacy ones, so I gradually shifted the weights toward Graviton to give it more traffic, and finally we used only the Graviton instances. We use Cluster Autoscaler for scalability, and I have a plan to use Karpenter rather than Cluster Autoscaler.

Unfortunately, we had some problems when I deployed our application to Graviton. I've summarized my troubleshooting on this page. First, pods sometimes crashed, leaving these Java error logs. Let me tell you more details: pods got stuck at 100% CPU for about an hour, then restarted by themselves and served normally afterward. These Java error logs looked like JVM bugs, so we gradually updated the Java version to the latest version, which was very helpful. However, the restarting problem was not completely eliminated; occasionally pods still restarted once or twice per week.

So I tuned the JVM runtime options, such as UseContainerSupport and so forth. These are Java options; when we use them, Java becomes aware of the container's CPU and memory. However, these options had some compatibility issues with our environment, so I removed them, and I updated all of the dependencies related to our application, such as the Spring dependencies and the Datadog agent. Finally, we resolved this restarting problem.
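For context, the container-awareness options mentioned here are standard JVM flags along these lines; the values are illustrative only, not the ones used in production:

```
# Make the JVM respect container CPU and memory limits (enabled by default on recent JDKs)
JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -XX:ActiveProcessorCount=2"
```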

And there are some bug reports in Java about the Arm architecture on the official Java website, so I recommend you keep to the latest Java version. I think Java 17 is better than Java 8 for using the Arm architecture, such as Graviton.

I got impressive results from the Graviton instances. First, we got improved API server response latency, like this. This is our Datadog metrics. It was very impressive, because we didn't expect this improvement in API server response.

Second, we could absolutely reduce our EC2 cost compared to before. In the us-west-2 region, c7g.2xlarge is less expensive than c5.2xlarge, and the saving percentage differs depending on the region and the instance class.

Now I am changing the instances in our recommendation system to Graviton. This system is an ML-based engine for our content recommendations. It provides various kinds of APIs, such as curation, personalization, and recommendation, to Samsung Gaming Hub and so on, and it consists of API instances and ML instances, so I focused on the API instances. This application uses Node.js rather than Java, and I have had no problems so far, so I could visit Las Vegas and stand here without any troubleshooting.

And recently, Samsung renamed the Game Launcher to Samsung Gaming Hub in the release of One UI 6.0. OK, so my story ends here. Thanks for listening. Thank you very much.

So as you can see, for many of our customers, including Samsung Electronics, Graviton is a straightforward migration that frequently involves no changes to code whatsoever: just building multi-architecture or Graviton-specific images, deploying, testing, and occasionally finding some issues that may just require an update to your runtime, an update to a library, or a minor configuration adjustment. This is the experience of many of the customers I speak to who migrate to Graviton every day.

Now again, if you're running a byte-compiled application like Java or .NET Core, or an application written in a scripting language like Node.js, which is the most common language for containerized applications running on AWS, or Python, Ruby, or PHP, all of those have runtimes available for Graviton that are optimized today.

On this slide, I provide a few QR codes that you're welcome to scan with your camera. They take you to some blogs and how-to articles you can explore to help you dive deeper into how to build containers for Graviton and deploy them. I'll leave this up for just a moment for you to capture that.

And finally, we'd like to offer you the opportunity to continue your compute learning with AWS. There's importance in investing in developing ourselves, our teams, and our organizations with the knowledge needed to renew and grow. Our customers want a way to gain and externally demonstrate their AWS compute knowledge and skills; organizations are seeking skilled people like yourself, and they want to understand the skills of their current employees or potential new hires.

So, with that, we're offering a compute knowledge digital badge and a learning path via AWS Skill Builder. There are learning paths to help you build knowledge in areas such as compute, and the ramp-up guide includes details of the training that helps you build that knowledge, plus other suggested resources. You can then take an assessment, earn a compute badge, and display it wherever you hope employers or your peers will find it.

So feel free to take a photo of that QR code to take you to the compute Skill Builder website.

Finally, I'd like to thank you all for attending. You've been a great audience; it's been wonderful to talk to you. I'd like to give special thanks to my colleague J Young Kim and to Hyun Cho Kim from Samsung Electronics. If you have any questions about migrating to Graviton or running containers on Graviton, I'll be out in the hallway if you'd like to speak one on one.

And finally, we thrive on your feedback. So if there's anything you liked, or anything you didn't like, about this session today, please let us know; we read your feedback eagerly. Thank you very much for joining us.
