New era of IaC: Effective Kubernetes management with cdk8s

Good morning and welcome to Raven 2023. My name is Victor, and I am a Developer Advocate at AWS. Before I joined AWS, I worked as a DevOps engineer for more than 10 years, and for five of those years I deployed Kubernetes clusters. And when I say Kubernetes clusters, I mean a lot of them, more than 100.

But all that time I kept getting a question from development teams: how can I, as a frontend developer, use TypeScript, or, as a backend developer, use Go or Java, to configure and manage Deployment objects inside Kubernetes? That is what we'll cover today, together with Mike.

Hi, good morning, everyone. My name is Mike Gold, and I'm a Solutions Architect at Amazon Web Services. Before that I went through multiple roles in different IT teams, and about 4.5 years ago I worked as a freelance developer. My customer asked me: can we deploy the backend of our application into a Kubernetes cluster? I thought, well, I know containers, and I had a little bit of experience with Kubernetes, mostly kubectl get and kubectl describe, but still, how hard can it be? So, let's try that.

So I received my cluster and started creating the YAML templates, and to be honest, it was really challenging for me to understand all of it. And that's exactly what we are going to talk about today: how you as developers can create Kubernetes deployments using a programming language you already know. For me at that time, by the way, it was Python, and Python is one of the languages supported by the tool we are going to talk about. And of course, we will show a demo of how you can actually host your application inside a Kubernetes cluster.

But before we do this, let us get to know you a bit better. I would ask you to scan this QR code with your mobile phone and answer a short question: how much experience do you have with Kubernetes? We will give you about 45 to 60 seconds to respond.

There are four options: maybe you have never tried it, maybe you're testing it in a development environment right now, maybe you're running it in production, or perhaps you already have multiple Kubernetes clusters in your production environment.

And while you answer, I will finish my story. After deploying my workload into the Kubernetes cluster, I didn't know the hard part was still to come: networking and role-based access control. When I started creating manifests for that, I managed to lock myself out of the cluster completely. Only the DevOps engineer had access to the cluster, so I had to ask him to restore my access.

That's why it would have been really great if I had had this tool back then. Now we are going to close the poll in about 5 to 10 seconds, so please send your last responses... It's not working. OK, let's do it a different way. Can you raise your hands: who is already running multiple Kubernetes clusters in production? Oh, wow, I think most of you. A quick question from my side: do you have more than 50 clusters in your organization? OK, that's nice.

So Victor, can you tell us what we see in the industry? Yeah, definitely. When I asked whether you have more than 50 clusters, that is exactly what surprised me in the latest State of Kubernetes report. What we see in 2022 is a huge jump from 2021: many organizations decided to deploy more than 50 clusters within a single company, and fewer than 12% of organizations run five clusters or less.

But still, the question is why people chose Kubernetes to run their containers. When we start working with our first containers, it seems really easy: we can run them on a laptop, or spin up an EC2 instance and run our containers there. But when we need integration, like communication between containers, a storage layer, or anything else, we definitely need a container management system, and Kubernetes is a very popular one. One of my favorite things about it is that it's open source and has a huge community, so we can do many different things with Kubernetes.

So we create a lot of Kubernetes clusters and deploy many microservices inside them. But how exactly do we manage Kubernetes objects? The answer, of course, is YAML. Remember, I'm the DevOps engineer, and I really do like YAML, sometimes. When you open a YAML file, it's very easy to understand what's going on, because it was created for humans. And for me it's universal: as a DevOps engineer I create my manifests for Kubernetes, then I can open my CloudFormation templates and keep working in YAML, I can configure and manage my CI/CD pipelines, and so on. YAML is super universal.

My favorite feature of YAML is that it's declarative: I describe the desired state, for example, "Kubernetes, please deploy six replicas of my container," and Kubernetes does it. That's great. But the same thing happens as with your first containers: YAML works well at the start, but when you need something more complex, more interesting solutions, it gets harder, and YAML is no longer the best way.

The first reason is that it does not follow the software engineering best practice of "don't repeat yourself." For example, you need to create multiple deployments for your microservices, and most probably you will copy and paste. Or you need to deploy your manifests to different environments, like staging and production, and again, if you use only plain YAML, you copy and paste, I would say, 80% of it. And as I mentioned, Kubernetes is open source with a huge community, so there are some interesting tools in the ecosystem. I still have hope the poll will work; please, try one more time.

If not, we will do it the legacy way: you raise your hands and answer my question. No, it doesn't work. OK, so let me ask: what kind of approach do you use? Maybe you apply your YAML files through plain kubectl, or through your custom CI/CD? OK. Who knows what Helm and Kustomize are? OK. And what about cdk8s, the tool we are going to talk about today? Just a handful of hands. OK, so you will learn a lot today, because from the AWS perspective Helm and Kustomize are very popular, but as I mentioned, from a development perspective we can sometimes be a bit more effective.

I'm not really surprised that so many hands in this audience went up for Kustomize; there are a lot of reasons why. The main reason is that it helps us implement best practices like "don't repeat yourself." On this screenshot we have a base for our deployment, for example for the development environment. When we need to make changes, say, in staging we need not one replica but two, we can do that very easily; for production we need to add an additional storage layer, and again we can patch those changes on top of our base layer.
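The overlay pattern described above can be sketched like this (the file layout and the deployment name `my-app` are illustrative):

```yaml
# overlays/staging/kustomization.yaml -- reuse the base, change only the delta
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base        # the shared deployment definition
replicas:
  - name: my-app      # deployment defined in the base
    count: 2          # staging runs two replicas instead of one
```

Running `kubectl apply -k overlays/staging` renders the base with the staging changes applied, without copying the base manifests.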

But still, we are left with YAML; we have to manage all of this in YAML files. On the other hand, if you want to deploy some service, for example a monitoring solution or a database, most probably you will find a ready-made package for it, and Helm is the package manager for Kubernetes, the de facto standard. But if you look at Helm charts, they are still written in YAML, so we still have to understand what's going on and how to write those YAML files.

On the other hand, Helm provides templating based on Go templates. That means I can create a condition: if I deploy this Helm chart to Kubernetes with certain values set, do it this way; if those values are absent, do it another way. Again, this helps me avoid duplicating my code with copy-paste, but I still have to do it in YAML.
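For example, a chart template might branch on values like this (a hypothetical fragment, not from any real chart):

```yaml
# templates/deployment.yaml (fragment)
spec:
  replicas: {{ .Values.replicaCount | default 1 }}
  {{- if .Values.metrics.enabled }}
  # ...render a metrics sidecar only when the value is enabled...
  {{- end }}
```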

But what about the question from my developer team, for example from Mike: can I manage my deployments, or any Kubernetes objects, without leaving my favorite language, be it Python, TypeScript, Go, and so on? And indeed, the answer is cdk8s.

So cdk8s stands for Cloud Development Kit for Kubernetes. Basically, it is an open-source framework that allows you to model Kubernetes resources, and importantly, those resources can be modeled as reusable components. Why is this important? As Victor mentioned: so that we don't need to copy-paste and modify our YAML templates.

Basically, with cdk8s you can store the artifacts of your infrastructure as code for Kubernetes the same way you do for your software packages. In terms of programming languages, it currently supports four: Python, TypeScript, Java, and Go. And to use cdk8s you don't need to install anything into your Kubernetes cluster; it runs locally. You install the CLI, and the CLI generates the YAML templates for you.

If you look at the flow in more detail, it all starts with installing the command-line interface. Then you initialize your project, which creates the cdk8s boilerplate code for you. Inside this project you write your own source code that represents your Kubernetes objects. And in the end, you synthesize the actual manifests, again using the cdk8s CLI.
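For a TypeScript project, the flow just described boils down to a few commands (a sketch; the final kubectl step is for a development cluster):

```sh
npm install -g cdk8s-cli    # install the CLI
cdk8s init typescript-app   # generate the boilerplate project
# ...edit main.ts to model your Kubernetes objects...
cdk8s synth                 # synthesize YAML manifests into dist/
kubectl apply -f dist/      # apply them to a development cluster
```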

Finally, of course, you need to apply these manifests to your cluster. For a development or staging environment you would probably go with plain kubectl apply, but for production we definitely recommend a more automated approach, like the ones Victor mentioned; with GitOps, for example, you can automate the cdk8s flow as well.

Because cdk8s generates the manifests, you can commit them to your GitHub repository and then trigger the whole flow from there. Let's now take a look at an example of cdk8s source code. On the left side you can see the cdk8s code for an nginx deployment, and on the right side the resulting YAML manifest. You might notice that they are quite similar.

We basically define the same properties in both of them; the difference is just the syntax we use. Now you might ask: OK, that's great, we can use cdk8s to store our artifacts this way, but does it really bring me that much value if I still need to provide all these properties? And the answer is cdk8s+.
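To make the "same properties, different syntax" point concrete, here is a dependency-free sketch of the idea behind cdk8s: a typed object in your language is serialized into the manifest you would otherwise hand-write. All names here are illustrative; this is not the real cdk8s API.

```typescript
// Conceptual sketch only (no cdk8s dependency): model a Deployment as a
// typed object, then serialize it into a Kubernetes manifest.
interface DeploymentManifest {
  apiVersion: string;
  kind: string;
  metadata: { name: string; labels: Record<string, string> };
  spec: {
    replicas: number;
    selector: { matchLabels: Record<string, string> };
    template: {
      metadata: { labels: Record<string, string> };
      spec: { containers: { name: string; image: string }[] };
    };
  };
}

// Build the manifest once in code; labels and selectors stay consistent
// automatically instead of being copy-pasted across YAML files.
function nginxDeployment(name: string, image: string, replicas: number): DeploymentManifest {
  const labels = { app: name };
  return {
    apiVersion: "apps/v1",
    kind: "Deployment",
    metadata: { name, labels },
    spec: {
      replicas,
      selector: { matchLabels: labels },
      template: { metadata: { labels }, spec: { containers: [{ name, image }] } },
    },
  };
}

// Serializing the object gives the manifest you would otherwise hand-write.
console.log(JSON.stringify(nginxDeployment("nginx", "nginx:1.25", 2), null, 2));
```

Printing the object yields a manifest equivalent to the YAML on the right side of the slide (YAML is a superset of JSON, so kubectl accepts this form as well).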

So let's look at the source code once again and think about which of these properties we really need. Apparently, we only need to know the image name of our container; everything else, the name of the deployment, the labels, the selectors for those labels, cdk8s can generate for us. And it all comes down to these three lines of code with cdk8s+.

So cdk8s+ is a library built on top of cdk8s that provides API abstractions over Kubernetes objects. Basically, you get a richer API with less source code and less complexity.
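In TypeScript, the "three lines" look roughly like this. This is a sketch that assumes the `cdk8s` package and a versioned `cdk8s-plus` package (here `cdk8s-plus-27`) are installed, so treat the exact import and property names as approximate and check them against the cdk8s+ docs:

```typescript
import { App, Chart } from 'cdk8s';
import * as kplus from 'cdk8s-plus-27';

const app = new App();
const chart = new Chart(app, 'demo');

// cdk8s+ fills in the deployment name, labels, and matching selectors for us.
new kplus.Deployment(chart, 'nginx', {
  containers: [{ image: 'nginx' }],
});

app.synth(); // writes the full Deployment manifest to dist/
```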

Now we have reviewed a tool that lets you manage your Kubernetes workloads with infrastructure as code in a high-level programming language. But you might have a question: what about my AWS services? Maybe from your Kubernetes pods you're using an RDS database, or a DynamoDB table, or you need to connect to CloudWatch.

So how do you do this? Do you need separate infrastructure as code for that? Well, potentially the answer is yes, you can do that; you can even integrate the AWS CDK with cdk8s, generate the manifests, and apply them to the cluster. But can we simplify it further, is there a better way to bring these tools together? Again the answer is yes: as you might have guessed from the title of my previous slide, we can do this directly from our Kubernetes cluster.

So basically, you manage AWS resources the same way you manage Kubernetes objects, and the tool for this is called AWS Controllers for Kubernetes, or ACK. Here is how the flow looks: you as a developer create manifests, the same kind of YAML manifests you are used to in Kubernetes, and inside these manifests you use CRDs, custom resource definitions, that represent the underlying AWS services.

Finally, you apply the manifest to the Kubernetes API. Again, as we discussed, you might use kubectl apply, a GitOps flow, or any other way you prefer. Inside your cluster there is an ACK controller running, and this controller talks to the AWS API to actually manage the resources. I would like to highlight this point once again:
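Such a manifest could look roughly like this (an illustrative sketch based on the ACK RDS controller's DBInstance CRD; names, namespaces, and values are made up, and the exact fields should be checked against the controller's reference):

```yaml
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: demo-db
  namespace: data
spec:
  dbInstanceIdentifier: demo-db
  dbInstanceClass: db.t4g.micro     # Graviton-based instance class
  engine: postgres
  allocatedStorage: 20              # gigabytes
  masterUsername: adminuser
  masterUserPassword:               # reference to a Kubernetes Secret
    namespace: data
    name: demo-db-creds
    key: password
```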

The ACK controller talks directly to the AWS API; it does not use CloudFormation or any other infrastructure-as-code tool under the hood. This means your Kubernetes cluster becomes the single source of truth, not just for your Kubernetes workloads but also for your AWS resources.

ACK is an open-source project hosted on GitHub and actively developed; at the moment more than 30 AWS services are supported. These are the services customers most often use from Kubernetes: databases, storage, application integration (for example, message queues), APIs, security (for example, encryption), observability, and so on.

Alright, just to recap, we have reviewed two tools. The first, cdk8s, lets you use a high-level programming language for your Kubernetes infrastructure. The second, ACK, lets you use Kubernetes infrastructure as code, that is, manifests, to manage AWS resources.

So the logical question is: can we bring these tools together? Again, the answer is yes. First, you install the ACK controller into your cluster, and this is important: if you need to manage multiple AWS services from your cluster, you need to install multiple ACK controllers, because each controller is responsible for exactly one AWS service. Then you configure permissions so that the controller can talk to the AWS API; as we discussed, it doesn't use any other tools under the hood.

There are, again, multiple ways to do this, but we recommend IAM Roles for Service Accounts, or IRSA. IRSA lets you map IAM roles and permissions to Kubernetes service accounts, so you can assign permissions granularly instead of giving the same permissions to all your Kubernetes nodes. Then you import the CRDs, the custom resource definitions, into your cdk8s project,

so that cdk8s knows how to transform your source code into the YAML templates. There are two ways to do this: you can import the YAML for the CRDs one by one, as I'm doing here on the slide, or you can install the Helm package for your ACK controller and import all of its CRDs at once. You will see the latter in the demo later on.
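On the CLI, the import is a single command (the file path below is illustrative):

```sh
cdk8s import ./crds/dbinstance-crd.yaml   # generate typed classes for one CRD
cdk8s import k8s                          # regenerate the core Kubernetes API types
```

After importing, the generated classes live alongside the core Kubernetes classes in your project, so custom resources are written in the same style as native objects.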

And finally, you can start creating your AWS resources the same way you create your Kubernetes objects. One important thing: you don't get cdk8s+ constructs for these resources, only the generated cdk8s code, but it still lets you manage your AWS resources with the same syntax as your Kubernetes objects.

So let's ask this question a third time: can we simplify it even further? And as you have probably seen in the abstract of this session, the answer is yes, with Amazon CodeWhisperer.

Amazon CodeWhisperer helps generate source code for us based on context: the source code we have already written and the comments we write in plain English. Besides that, CodeWhisperer can run a security scan of our code. One caveat: of the four languages supported by cdk8s, CodeWhisperer supports two for security scanning, Java and Python.

And finally, a point that is especially important for companies: CodeWhisperer can flag, or even filter out, generated code that resembles open-source code, which can matter for your organization's policies.

And before we go to the practical part, how does CodeWhisperer work? It works directly from your IDE: you install the extension or plugin, and as you write your code, CodeWhisperer analyzes it automatically and starts generating suggestions. Those can be line-by-line suggestions, whole functions, or whole blocks of code.

One very important thing: you should always review the suggestions before accepting them; you may need to adjust them to make sure they do exactly what you intended. There are three main ways to accept them, which you will see later in the demo: accept the whole suggestion, accept it word by word, or scroll through the suggestions and pick a different one.

So without further ado, let's go to the practical part.

So, what have we done so far? We created our database; Mike showed how we can deploy our RDS instance. Now we continue with the cache layer, and for the cache, to compare with what Mike showed, I'm going to apply cdk8s+ again.

Let's go back to Visual Studio Code and continue writing our code. First of all, before creating my deployment, I want to define what kind of node I prefer for my web and cache layers. In my case, I'm going to use a node label query for memory-optimized nodes; for me that is the best fit, but you can use your own if you prefer.

We continue by creating the first deployment, for my cache layer. I write a comment along the lines of "cdk8s+, please create a deployment for me," and in a few seconds CodeWhisperer generates the whole deployment manifest. But it's not exactly what I want to achieve. So I prefer to apply some of CodeWhisperer's recommendations word by word and make changes where it makes sense for me.

The metadata namespace is absolutely right. As we continue, I start with the containers: it's my cache layer, so I definitely want to define my image. CodeWhisperer suggested Redis, a good choice, but in our case I'm going to use a custom image called "cache". Of course, we also have to define the port number, and how we are going to communicate with our RDS database. For that we need environment variables, and Mike already created a ConfigMap for me, so in my code I read this data from it. The first value, the DB host, I write myself: where it's located, what the namespace is, what the name of the parameter is, and so on. As soon as I finish, CodeWhisperer understands: maybe this time you need the DB port too, maybe you communicate with your database on a custom port. Sure, and I accept the whole line CodeWhisperer generated. The next one is the DB user, same thing, it looks good, and I apply these changes.

Now, back to our requirements: the application should be highly available, which means we need at least two replicas, so in cdk8s+ I define replicas: 2. We also said to spread these pods across different nodes, so I set the spread flag to true. Next, we are asked to isolate the workload with a network policy, so let's define that restriction. Then we define where these pods should be scheduled; in my case they are attracted to the memory-optimized nodes we defined earlier. The final step concerns the service: the cache will be exposed only internally in our Kubernetes cluster, so no external access, and since we didn't define any service type, by default, as you know from Kubernetes, it will be ClusterIP. So again, let's synthesize our changes and see how they are reflected in our YAML file.

Let's open our manifest and compare: the handful of cdk8s+ lines on the left side, and the YAML manifest they were transformed into on the right side. On the left, as you can see, I wrote only a couple of lines, but the generated YAML doesn't even fit on one screen; that alone tells you how many lines there are.

For scheduling on memory-optimized nodes, I wrote "please schedule my pod on these nodes," and cdk8s+ created the node affinity for me. What else? I said "please spread it," and it created pod anti-affinity to spread the pods across different nodes. Moving on, in our code we have environment variables taken from the ConfigMap that Mike created; we just read those values so our cache layer can talk to the data layer. You also see a lot of additional generated lines, for example the security context: that's a best practice, because I believe none of you would grant root access inside your pod, so cdk8s+ generated the security context for us. We also set isolate: true, which enables a network policy, and a NetworkPolicy appears on the right side in our YAML file. And the last piece is the service, how we expose our cache layer: we expose it only internally, and cdk8s+ created a ClusterIP Service object for us. Looks good to me, and the final step of our demo is to create the web layer.

So where are we? We're done with RDS using ACK, and we finished the cache layer with cdk8s+ and applied it. What remains is the web layer. Again, let's go back to Visual Studio Code and start writing the new deployment. As I already mentioned, CodeWhisperer understands the whole context of what you describe in your file, and this time, as you can see, it generates more reasonable suggestions: the namespace, the containers, and so on. Again, I apply them word by word. Why not the whole line? Because I'm going to use a custom port: not 80, which is the default, but 6000. I also have to define how the web layer communicates with the cache layer: we exposed it via a ClusterIP service, so I define the cache host and the cache port. What else? Remember our requirements: highly available, spread, isolate, and so on. Then we define where these pods should be scheduled in our Kubernetes cluster, but we also say: please colocate my web layer and my cache layer on the same node to avoid network latency. We also remember the network policy: we created a policy that blocks communication, so we need separate rules to allow it, and I say "please allow traffic from web to cache"; we will see how that is generated. Finally, the service: this time I define the type of my service as LoadBalancer, and that's it. Again, let's synthesize our changes and see how they are reflected in our YAML file.

This time I'm not going to show you everything that was written into the deployment, only the differences. In the cache layer we applied isolation, spread, and so on; in the web layer we additionally said: please colocate it with the cache. Let's find this in our YAML manifest. Again, on the left you see how few lines are written in TypeScript and how they are reflected in my YAML file. For colocating web with cache, cdk8s+ defines additional pod affinity. Through our environment variables we also see the custom port number for the cache layer: we said "please use port 7000," and we see the same number in the configuration. What else? We said "please allow traffic from the web layer to the cache layer," which means we need NetworkPolicy ingress and egress rules for this communication, and cdk8s+ generated the YAML representation of the network policy on the right side. And finally, how we expose the service: through a load balancer, which means it will be available externally to my end users and will create a load balancer inside AWS; we see the representation of that in our YAML file as well.

I think it's a good time to apply all our changes. How do we do it? As usual, with kubectl apply. Let's open the terminal, and I apply all the files we generated, which live in our dist folder. We see that all the resources are being created. I'm not going to walk through every resource, but I want to show you one, related to our RDS data layer. To see what's going on with our RDS database, I describe the state of the Kubernetes object with kubectl describe. Our object lives in a custom namespace, so I have to specify where it's located and copy the name of my object. Let's copy the name of our DBInstance object and paste it into the terminal. I don't want to show the whole description, just the key lines: 20 gigabytes of storage, my favorite instance type, Graviton of course, and the status of the database. As we see here, it's "creating". That's good: we just applied our changes and the database is being created. So that's it, we have finished with this application.

Well, we deployed to production and our application became very popular, so two replicas will not be enough. We got an additional requirement: we have to make our application scalable. How do we do that in Kubernetes? As usual, we can use the Horizontal Pod Autoscaler to define how many replicas we need, but we also have to provide additional compute resources with a cluster autoscaler. In my case I'm going to use Karpenter, and Karpenter is distributed as a Helm chart. So the question is: should I manage the Karpenter deployment configuration outside of cdk8s? The answer is definitely no: you can use cdk8s to install and configure Helm charts inside your Kubernetes cluster.

Let's see what this looks like. If I want to install Karpenter, I define a new Helm object: where my chart is located, its version, any additional values and configuration, and so on. I also have to import the CRDs of this Helm chart, which define how nodes are provisioned. What else do we have to do? Of course, the Horizontal Pod Autoscaler: in cdk8s+ there is a separate object called HorizontalPodAutoscaler. First, we delete the two replicas we defined in our deployment and move that value to the minimum replicas of our HorizontalPodAutoscaler. And that's it: we get a scalable application that scales on the metric we chose, currently average CPU utilization, from 2 to 100 pods.
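Put together, the Helm install and the autoscaler might look roughly like this in cdk8s and cdk8s+. This is a sketch: the package names, chart location, chart values, and metric helpers are approximate, so verify them against the current cdk8s and cdk8s+ API docs before using them.

```typescript
import { App, Chart, Helm } from 'cdk8s';
import * as kplus from 'cdk8s-plus-27';

const app = new App();
const chart = new Chart(app, 'scaling');

// Install Karpenter from its Helm chart, straight from cdk8s code.
new Helm(chart, 'karpenter', {
  chart: 'oci://public.ecr.aws/karpenter/karpenter', // illustrative location
  version: '0.37.0',                                 // pin the version you tested
  values: { settings: { clusterName: 'demo' } },     // illustrative values
});

// Replace the fixed replica count with a Horizontal Pod Autoscaler.
const web = new kplus.Deployment(chart, 'web', {
  containers: [{ image: 'web' }],  // note: no `replicas` property any more
});
new kplus.HorizontalPodAutoscaler(chart, 'web-hpa', {
  target: web,
  minReplicas: 2,
  maxReplicas: 100,
  metrics: [kplus.Metric.resourceCpu(kplus.MetricTarget.averageUtilization(70))],
});

app.synth();
```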

And I think it's a good time to summarize what we learned today. Let's start with cdk8s. cdk8s is an open-source framework for modeling Kubernetes resources. In simple terms, with cdk8s you can create your Kubernetes applications, and reusable abstractions for them, in familiar programming languages such as Python, Java, TypeScript, and Go. cdk8s works both with native Kubernetes objects and with CRDs, custom resource definitions. We recommend applying the results of cdk8s, the YAML manifests it generates, through a GitOps workflow. Then, cdk8s+ is a library built on top of the auto-generated constructs of cdk8s; it provides high-level abstractions for Kubernetes objects and gives you a richer API with less complexity, so you don't need to specify every parameter of your objects.

Then, AWS Controllers for Kubernetes, or ACK, let you define your AWS resources inside Kubernetes manifests; they cover the services most often used in Kubernetes applications, for example databases and message queues. And finally, Amazon CodeWhisperer is an AWS service that uses artificial intelligence to suggest source code, based on the context and on the comments you write; suggestions can be line-by-line, whole functions, or whole blocks. Also, as a reminder, CodeWhisperer has built-in security scanning that can help you find non-obvious security vulnerabilities.

If you want to learn more about the technologies we talked about today, I would first recommend the link on the left, the main cdk8s documentation portal, cdk8s.io. And if you want to review the demo we showed today, please check out the GitHub repository on the right. We introduced some changes into the demo, specifically around the ACK part and the RDS database, but this repository should give you a good starting point. And that is what we wanted to show you. Hopefully you have seen how you can manage your Kubernetes applications using familiar programming languages. If you have questions, please feel free to reach out to us directly using the contacts on the slides, and please also fill in the session survey in your mobile application; we want to hear your feedback.
