Scale your application development with Amazon CodeCatalyst

Good afternoon, everyone. Thanks for joining. We're excited to spend some time talking about our new enterprise features and some of the advances in CodeCatalyst over the last few months.

I'm Kyle Seaman, one of the product managers on the team. And I'm Harry Mower, the director of DevOps Services at AWS.

Before we get into all the new features, I want to talk a little bit about CodeCatalyst and where we are today. It was this time last year at re:Invent that we launched CodeCatalyst in preview, and then earlier this year, in April, we made it generally available for everybody.

What we're trying to do with CodeCatalyst is make it the best place for you and your teams to plan, create, test, and deploy applications on AWS. We do that by bringing together all the tools your teams need to go from idea to production in one seamless experience that's managed by AWS and deeply integrated with the cloud.

CodeCatalyst has a wide range of features, and we're going to talk a lot about them today. They include things like issue boards to help you and your team plan and manage your work, and source control management tools that make it easy for developers to collaborate on code changes and review code.

We also include managed, cloud-based dev environments that make it really easy for developers to get up and running and debugging code without ever having to install anything on their laptops. And of course, we include really robust and powerful CI/CD tools that make it super easy for you to build, test, and deploy your applications on AWS.

In addition to all the great features that already existed, this week we've made a number of announcements and introduced a number of new features. One I'm really excited to talk about is how we've integrated Amazon Q directly into CodeCatalyst.

For those who aren't familiar with Q, it's a generative AI-powered assistant that helps you with a wide range of tasks across AWS, including the Management Console, documentation, popular IDEs, and of course now CodeCatalyst. I really believe this is going to transform how developers and teams build software in the future.

Now developers can go from idea to fully merged pull request simply by creating an issue in CodeCatalyst and assigning it to Q. Q does all the heavy lifting of figuring out what your code actually does, then works with you on an approach for how it's going to add that new feature. You can then instruct Q to go all the way: generate the code and open a pull request that's ready to merge.

In addition to using Q to add features to your applications, we've also harnessed the power of Q to simplify everyday developer tasks, like summarizing all the changes in your pull request or creating a digest of comments on an ongoing pull request, so you can jump in and know where you can best effect change.

In addition to adding Q to CodeCatalyst, we've also released a number of new capabilities this week for managing your teams as you scale on CodeCatalyst. One of those is integration with single sign-on: now you can manage your users on CodeCatalyst using your existing identity provider, as you do across all your other systems.

In addition to that, we've also introduced a new concept of teams, additional roles so you can better manage how your developers interact within a project, and VPC integration so you can better secure your build and test environments.

One of the key features I mentioned before is our robust and really powerful CI/CD workflows. Throughout the year we've made a number of improvements, both through our community efforts with our field teams and from the product team. This week I'm really proud to announce that we've added six more workflow actions, including support for deploying to EKS, Terraform support, and a number of others.

In addition to that, we've also released a new SDK to make it easy for you to run your workflows from external systems and processes.

One of the other exciting things we launched this week was a new Enterprise plan. We're going to talk a lot about it during this session, and one of the things I want to cover first is pricing.

We introduced a new Enterprise pricing tier that makes it easy for you to cost-effectively scale your usage of CodeCatalyst. On the Enterprise tier, each user costs $20 per month.

More importantly, as you add users to that tier, we increase the number of compute minutes and dev environment hours, so all of your developers have the resources they need to do their work without incurring unexpected costs.

Each Enterprise seat gets up to 1,500 compute minutes per month to run your CI/CD workflows across all your teams, and 160 dev environment hours, so every developer on your team has access to their own development environment whenever they need it.
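As a rough illustration of how that per-seat pooling works, here is a tiny model using the figures quoted above ($20 per user per month, 1,500 compute minutes, and 160 dev environment hours per seat). Treat the numbers as illustrative and check current AWS pricing before relying on them.

```typescript
// Rough model of the Enterprise tier pooling described above.
// The figures are the ones quoted in the talk, not authoritative pricing.
interface TierQuota {
  monthlyCostUsd: number;
  computeMinutes: number; // pooled CI/CD workflow minutes per month
  devEnvHours: number;    // pooled dev environment hours per month
}

function enterpriseQuota(users: number): TierQuota {
  return {
    monthlyCostUsd: users * 20,
    computeMinutes: users * 1500,
    devEnvHours: users * 160,
  };
}

// e.g. a 10-person team pools 15,000 workflow minutes per month
const team = enterpriseQuota(10);
```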

The Enterprise tier includes all of the features we're going to talk about today, including one we're really excited about and will dive into: custom blueprints. For those who aren't familiar with blueprints:

Blueprints are a unique feature of CodeCatalyst that makes it really easy for you to start building applications the right way. You simply choose the type of application you want to build, and a blueprint generates everything you need to build, test, deploy, and maintain your application: the infrastructure, the application code, the workflows, and everything your team needs to go from idea to production in one CodeCatalyst project.

One of the first things customers asked us when they saw blueprints was: how can I create my own? We're excited to announce that with the Enterprise tier, now you can create your own blueprints. We're also excited to announce and show you today some of the new features that make it really easy to maintain your projects, even as your blueprints evolve.

Rather than me continuing to talk about it, why don't I hand it over to Kyle so he can show you a demo. Thank you.

All right, thanks, Harry. So, like I said, for those of you who have used blueprints, they've always been popular as this concept of more than just a template: a code generator where you can define your infrastructure as code and your starting code, as well as the pipelines that will deploy it.

Today I'm going to give you a demo of using a custom blueprint to create a simple Lambda stack that conforms to the best practices of your company, in this case a cost center tag. We're also going to show how you can make it dynamic, including an optional DynamoDB part, and then how you can define the regions the blueprint can deploy to, as well as how to configure the AWS accounts.

So we'll jump right in. We'll start in a CodeCatalyst space. A space is a container for your projects; it's also where you can manage configurations and settings across your different projects.

Now, under settings, there's a new tab called Blueprints. This is where you and your team manage the blueprint catalog and what's available in your space.

From here, we'll go ahead and create our first blueprint in this space. We'll give it the name "base lambda blueprint". Then we'll add the details, because this is going to show up in a catalog.

That way you and your development team will be able to pick the blueprints that are appropriate for each use case. So we'll give it a description to make it a little easier to discover, and add some tags.

The last thing we're going to do is leave this selected: a release workflow. The blueprint is being built inside CodeCatalyst, so you benefit from our CI/CD tooling. Every time there's a new version, the release workflow will publish it in preview to the catalog, and then you'll have the option to make it available for everyone.

Now we jump into CodeCatalyst, and we can look at the source repository that got created for our blueprint. As we mentioned, these are just TypeScript applications. They have a workflow defined, and where we're really going to spend our time is on the source as well as the static assets.

I'll show you a little bit of how to think about these blueprints. You could pull this down to your local machine and start developing, or you can use one of the dev environments Harry mentioned: open it in Visual Studio Code, maybe JetBrains, or use Cloud9 in the browser, which we'll do for this demo.

When we select that, we have the option to open it in a new branch. We'll do that, so we can use pull requests and the other features of CodeCatalyst, and we'll give the branch a name.

What this does is spin up a dev environment. What's really cool is I didn't have to configure my local machine to run blueprints: the dev environment comes preconfigured, it's authenticated into CodeCatalyst, and I can get started quickly.

Now we're going to do a bit of a deep dive into what the blueprint looks like. There are two main areas. We have a source folder, which has our blueprint.ts; that's our application, and it's where you define the options for the blueprint as well as the logic: which files we're going to generate and which workflows we're going to build. And of course we have defaults, so that everything works out of the box.

Then we go into static assets. This is where you can generate a Python app, Java, whatever you want. In this case, we're going to do a CDK stack with a Lambda folder, and then we're going to create a DynamoDB variant as well, so we can show how you can switch between the two.

So we'll jump back into the IDE and spend some time in the terminal, so bear with me. We're going to clean up the static assets folder, create our new base folder, and initialize a CDK app there; it's the simplest way to get our whole structure set up for how we want the project to look. Because really, the static assets are what will get put into the developer's project.

This is where you're really scaffolding things out in one spot so you can reuse them. This will stand up the full CDK stack that we need. Of course, it will be a generic stack, and we'll start to customize it.

The first thing we're going to do is actually delete a few things. There's an .npmignore in here that we'll get rid of because we don't need it. Then we're going to add our Lambda folder. This is super helpful: everyone who starts a project probably copies and pastes in their Lambda starter code, and if you have common patterns that you use, you can just codify them inside this index.js. It's the simplest hello Lambda that returns a response, but it's enough to be running code for the developer.
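A starter file like the one described might look something like this. The handler shape follows the standard Lambda proxy response; the message text is illustrative, not the exact file from the demo.

```typescript
// A minimal "hello" handler of the kind a blueprint might drop into
// static-assets/lambda/ as the starter code. Returns an API
// Gateway-style proxy response.
export const handler = async (): Promise<{ statusCode: number; body: string }> => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: "Hello from Lambda!" }),
  };
};
```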

We'll update our .gitignore to recognize that there's a Lambda folder and make sure those .js files get pulled in. Now we're going to touch the CDK stacks. Again, this could be anything: CloudFormation, Terraform, any of those. But we'll do CDK.

Out of the box, this comes with a base stack name. Of course, we don't want that; we want dynamic ones. So we'll use templating, and I'll show you how it gets used a little later. Basically, we'll have a dynamic stack name that's injected into the CDK app.

Then we'll go into the base stack folder and paste in a simple stack that has a Lambda function as a back end, as well as an API Gateway connected to it. You can see at the bottom line there, we have a cost center tag that we're applying, which is also dynamic, based on the blueprint configuration, using that templating.

So, you know, a very common setup.

But the idea here is you do it centrally so that your teams don't have to do it over and over again. Now, in the blueprint file, these are the options: this is what you're exposing to your developers to select. You choose how many options there are and how far developers can go with them.

We're going to paste in a few: the stack name and the cost center. We're going to predefine some regions, and then we're going to use an environment component that makes it really easy to connect to AWS accounts.

With those set up, we just need to import that region component, and we'll see in the UI how this renders for everyone.

The next step is going into the actual blueprint code. We check the defaults to make sure it runs, clean up some of the placeholders, and add our new defaults: a development environment, a default region, and a cost center of "generic". These could be whatever you want for the best out-of-the-box experience.
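To make the shape of this concrete, here is a sketch of what the options and defaults for a blueprint like this could look like in TypeScript. The names, the region list, and the useDynamo flag (added later in the demo) are illustrative; the real blueprint SDK derives its wizard UI from typed options along these lines.

```typescript
// Illustrative options a custom blueprint like this might expose.
// The union type stands in for the "approved regions" pick list.
type ApprovedRegion = "us-east-1" | "us-west-2" | "eu-west-1";

interface BlueprintOptions {
  stackName: string;
  costCenter: string;
  region: ApprovedRegion;
  useDynamo: boolean; // exposed later in the demo, under a "Database" section
}

// Defaults so the blueprint works out of the box, mirroring the ones
// described in the talk (cost center "generic", useDynamo off).
const defaults: BlueprintOptions = {
  stackName: "base-lambda-stack",
  costCenter: "generic",
  region: "us-west-2",
  useDynamo: false,
};
```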

The last thing is that the blueprint can actually create CodeCatalyst resources, so we want a workflow. It's one thing to have your code and your stack; you need to get them into the cloud. So we pull in the static asset files and do the substitution on top of them: everywhere we see the cost center and stack name placeholders, those get updated to whatever the blueprint options were and become part of the bundled code.
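The substitution step can be pictured as a simple find-and-replace over the bundled files. The real blueprint SDK handles this for you; this standalone sketch, with made-up `{{stackName}}` and `{{costCenter}}` tokens, just illustrates the idea.

```typescript
// Walk the bundled static assets and replace placeholder tokens with
// the wizard's selections. Token syntax here is illustrative.
function substitute(
  files: Record<string, string>,
  options: { stackName: string; costCenter: string }
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [path, content] of Object.entries(files)) {
    out[path] = content
      .split("{{stackName}}").join(options.stackName)
      .split("{{costCenter}}").join(options.costCenter);
  }
  return out;
}
```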

Then we can write a workflow that will take that code and deploy it into the AWS account, so we'll go ahead and update our blueprint's workflow now. For those unfamiliar with workflows, think of them as attached to source repositories, with many actions you can run inside them; we'll show some more demos of that a little later today.

Here I'm adding an on-push trigger, so the workflow runs when a push is made to the main branch. It's a simple action to get started: it has a CDK bootstrap, which makes sure the account can work with this, as well as a CDK deploy. So out of the box, you don't have to worry about any of that. We're handling the connection into the AWS account, and the environment component I showed you earlier does all the IAM management with temporary credentials.
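The generated workflow is really YAML attached to the source repository; sketched here as a plain TypeScript object for illustration, its shape is roughly the following. The action identifiers are an assumption based on the CDK actions mentioned in the talk, so verify them against the actual actions catalog.

```typescript
// Rough shape of the generated deploy workflow: a push trigger on main,
// a CDK bootstrap step, and a CDK deploy step that depends on it.
// Identifiers below are assumed, not copied from the demo.
const releaseWorkflow = {
  Name: "deploy-on-push",
  Triggers: [{ Type: "PUSH", Branches: ["main"] }],
  Actions: {
    CDKBootstrap: { Identifier: "aws/cdk-bootstrap@v1" },
    CDKDeploy: { Identifier: "aws/cdk-deploy@v1", DependsOn: ["CDKBootstrap"] },
  },
};
```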

And with that, we have an application standing up that we can share with our developers. The last thing I wanted to do while we're in here is add the Dynamo option. So under static assets, we add a new Dynamo folder and mirror the lib base stack, but make a version that has Dynamo in it instead of just the API Gateway with Lambda.

With that set up, we can paste this in. You'll see here that we have the Dynamo stack being imported and attached to the function, and now inside blueprint.ts we have a new static-asset decision to make. So we'll go and update that, and then I'll show you what it looks like.

The last step is back in blueprint.ts, where we add this underneath. We're doing the substitution and pulling in all that code; we've set up our project code; and now we add a check for whether the developer has chosen to use Dynamo. If they have, we read that folder, pull in that file, update the cost center, and blueprints merge those diffs. So we do that.
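The "merge those diffs" decision can be sketched as an overlay: start from the base assets and, only when the developer ticked the Dynamo option, let the Dynamo variant of the stack file win. This is a simplification of what the blueprint SDK does, with illustrative file paths.

```typescript
// Assemble the final project file set. When useDynamo is on, files in
// the dynamo overlay replace their base counterparts (e.g. the stack
// file); otherwise only the base assets are emitted.
function assembleProject(
  base: Record<string, string>,
  dynamoOverlay: Record<string, string>,
  useDynamo: boolean
): Record<string, string> {
  return useDynamo ? { ...base, ...dynamoOverlay } : { ...base };
}
```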

Now we just need to expose that option to our developers. We'll call it "use Dynamo", under a "Database" section (you can have other option groups), and then we'll update the defaults to set it to false.

And that's a high-level, straightforward blueprint with a few options. Now we're going to push it back into CodeCatalyst as part of our development lifecycle, because we expect platform engineering and other internal teams to be building these using the same development workflows as feature teams. So we push that back up.

Back in CodeCatalyst, we're going to go to our new branch and create a pull request so we can merge it back into main. So we do that here: create a pull request.

Depending on your policies, this might involve other reviewers or approvers. In this case, we'll just review it ourselves. With this pull request we have a history of the changes made to the blueprint; we're now versioning these blueprints, which means versioning your entire code base, not just packages. You can see the diff here, and then we choose our merge strategy and push it in.

You might recall we had the release workflow set up. Once we merge this in, it's automatically going to take the new blueprint, bundle it up, and push it into the catalog.

With that done, we hop over to workflows and can see it's kicked off and running, pushing a new version up. It has a few steps: sanity-checking the version, building it, and publishing it.

Now, as an administrator, I can go back into the space settings, and from there I'll see that my new blueprint has arrived and I can publish it into my catalog. We'll choose "add to catalog" and make 1.0 our current version.

So what we've done now, with a little time in the IDE, is stand up a blueprint that does some simple things: taking in some default parameters and options, giving you the choice to use Dynamo or not, and, more importantly, giving you the workflow and everything your development team needs to click a button and have the stack stand up following best practices.

Now let's look at the other side, as the developer wanting to use this. Back in CodeCatalyst as a developer, I can choose "create project", and now there's a new space blueprints tab. If I click that, this is your catalog: the most recent version of every blueprint you publish shows up here. In this case we have the one we just made, and clicking into it, you can start to see how those options are rendered in the UI.

On the right-hand side, we're seeing a preview of the code that's being generated. For example, we add a unique suffix to the end of the stack name, and you can see it's been auto-inserted into the code. And in the case of workflows, that environment component I mentioned knows enough that when you choose an IAM role, it updates the workflow to reference it.

What's really nice is that you're codifying these options inside your project code. They're valuable at create time, but also, as we'll show a little later, as you use your project over time. With those settings, I can choose my region from a pick list of what's been approved for this project.

And now I can create a new application using that stack. As a developer, instead of having to pull in a bunch of different tools, I just picked a blueprint, configured it to my settings, reviewed it, and I have a fully functioning code base along with the workflows and pipelines and everything I need to actually get this into production.

As you'll recall, we made a staged workflow that does the CDK bootstrap and deploy. We'll take a look: it's running as you'd expect, triggered on the first push, and what it's going to deploy is that first API Gateway. So your developer now also has a test endpoint they can actually develop against, and they haven't had to worry about setting it up. They're using preconfigured AWS accounts and the IAM roles they're allowed to use.

Click in here and we can see the hello Lambda, as expected. Very exciting API. One last thing I wanted to show: along with custom blueprints, we've released the ability to revisit your configurations. Because these are codified by the blueprint, we know what your configuration is and how you can change it.

So in this case, let's say I wanted to add that Dynamo table a little later. I can come back into the blueprint configuration view at any point and choose different parameters. Here I'm going to add DynamoDB; I could also have changed my regions or my AWS account. And it gives me a preview of what the diff will be.

You can imagine some of these will be a little more complex. If I like what I'm seeing here and it makes sense, I can apply it, and that ends up opening a pull request for the team to review. Again, what we're doing is letting you have a blueprint that encodes best practices, with a lot of options that teams can use dynamically as they see fit, while you know they're staying within the guardrails of what's approved.

So we see here that a pull request was opened, and the team can review it. When they merge it in, it will go ahead and deploy the new stack into the development environment.

That's a quick look at creating a project from a blueprint. You can see how quickly a team can get up and running, and also how you can see the configurations you chose and the way we track them.

What I want to do now is look at project lifecycle management. This is an area we hear about from a lot of customers: great, I've used a template, I've deployed the code; how do I make sure everyone stays in sync? How do I know what's happening?

In CodeCatalyst, we track the blueprints used in each project and monitor the versions they're on, allowing administrators to see which versions are in use and giving teams tools to update them.

So let's take a look at project lifecycle management, going back into the blueprint code base. You may notice the stack we're using is Node 16, and we want to update to 18. A change like this is easy enough in a package.json, but when you want to start versioning your actual code base, knowing that a given blueprint version is the one running on Node 18, that's what blueprints allow you to do. It could be a workflow, it could be the full stack, but it lets you use versioning broadly.

So what we'll do here is bump that version of Node in the CDK stack. We'll push that back up to CodeCatalyst and now there'll be a new version available in the catalog that the admins can approve.

We build, and in this demo we're actually using our CLI to deploy, depending on your preference for how you want to push versions up. Back in the settings, we now have this new blueprint version. You can see there's a new version, and we also show all the projects in your space using this blueprint. In this case it's just the one, but you can imagine a space with tens or hundreds of projects using many different blueprints.

So we'll make this new version the catalog version. Now, as a development team inside my project, I'm notified that there's an update available. Depending on your internal policies and how you want to run things, you can set up hooks; but here I can go and update the blueprint. We follow the same path, opening a pull request for the development team, because some of these merges are going to get more complex and might need some resolving.
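The "update available" notification boils down to comparing the blueprint version a project was created from with the current catalog version. A simplified, semver-style comparison (numeric dot-separated versions only; not the SDK's actual logic) might look like:

```typescript
// Flag a project as stale when the catalog holds a newer blueprint
// version than the one the project was generated from. Handles
// versions of unequal length by treating missing parts as 0.
function updateAvailable(projectVersion: string, catalogVersion: string): boolean {
  const a = projectVersion.split(".").map(Number);
  const b = catalogVersion.split(".").map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const diff = (b[i] ?? 0) - (a[i] ?? 0);
    if (diff !== 0) return diff > 0;
  }
  return false;
}
```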

But the goal here is to get everyone up to Node 18. In this case, it opens up that pull request, and it'll run and test in the dev environment before they promote it to production.

So that's a quick look at a simple way to start thinking about packaging and versioning your entire code base, not just packages, and how blueprints, with their context of CodeCatalyst projects, can help you deploy those changes in a central way.

Then we come back here and see that we're using 1.0. So that's a quick look at blueprints. I'm going to hand it back to Harry now, and we're going to talk a little more about actions.

Yeah, just one quick point about what Kyle showed you. If you had one or two projects, updating them is a pretty simple task. But imagine you have thousands of projects based on that previous blueprint version. With just a couple of clicks, you can have all of them updated to the latest version, and you can ensure that your developers are building the way you want them to on AWS.

One of the ways we also see platform engineers and IT leaders using these blueprints is in a modular way. One thing we didn't just talk about is that you can actually apply multiple blueprints to a particular project. So you could have blueprints just for your workflows and blueprints just for your application code, and mix and match them across different projects. We see workflows in particular as a good area to create a standardized template that you can apply to all your projects. Internally at Amazon, we try to model pipelines in a consistent way across all projects; we want to ensure that everyone is building and deploying code to the same standards. So you can imagine that you or your platform engineers create blueprints that define what that standard pipeline looks like. These pipelines tend to change over time: for us, we add new regions, and we add new rules based on things we learn from past deployments. And one area in particular that continues to evolve is security and vulnerability scanning.

One of the ways you can use blueprints to help your teams stay up to the latest security standards is by creating workflows that use workflow actions to do scanning and other types of vulnerability assessment. So I'd like to invite one of our partners up on stage, Sam Quakenbush from Mend, to show you how you can build a blueprint using one of their workflow actions to ensure that your code is secure when it gets to production. Here's Sam.

Thanks, Harry. As Harry mentioned, I'm Sam Quakenbush, senior director of field engineering at Mend.io. If you're not familiar with Mend.io, it might be because we rebranded over the last year and a half to better frame what we do. We help developers with vulnerabilities in their applications: we're an application security company solely focused on helping you find and fix those vulnerabilities, whether in your open source code or in your custom code.

That being said, Kyle walked you through a lot of different ways to use blueprints. Now I'm going to show you specifically how to use a Mend workflow with a blueprint and apply it to an existing project.

This should look familiar from earlier: we're going to go into our space and create a new space blueprint. We'll give it a name; we'll call it something like "mend-scan-workflow". Of course, we need to give it a display name and a description so that our developers can find it in the catalog later on. We'll also give it a tag; in this case, we're just going to use the tag "mend".

The idea with this specific workflow and blueprint is that we want to scan any time there's a pull request to our main branch. It's a very common pattern: when we're moving from development to main, we want a security scan so we can enforce our policies. So, as I said, we give it the "mend" tag and then go ahead and create this blueprint.

After the blueprint is created, as Kyle showed you before, we can use our favorite IDE to go and edit it. In this case, we're going to use Amazon Cloud9 to boot up that browser environment and work through the blueprint. It does come with the default blueprint.ts, but we're going to replace a lot of that and minimize it down to just the workflow. As Harry said, you can mix and match blueprints for different things; in this case, we're really just focusing on the CodeCatalyst workflow.

It went by quickly in the demo, but basically we copy, paste, and replace everything. Then I'll point out that there is the Mend product name; think of this as the application name. This is one of the choices the developer will have: they can use the default project name I'm giving it, it could come from an environment variable, or the developer can choose it when they apply the blueprint. That's in our defaults.json, and it's also tied into the blueprint.ts file.

After that, we'll walk through some of the different settings inside this blueprint.ts. You'll see down here that it's going to run on every single repository in the project. A little further on, you'll see that this happens every time there's a pull request to the main branch. And as we go further, this may look familiar if you've ever used CodeCatalyst workflows: this is an action within the workflow, the Mend action (version 1.0.9), and there's a Mend license key associated with it. What it does is create a SARIF report, and a little further on you'll see we have a security policy: if there is even one critical vulnerability, it's going to break the pipeline. We're doing that specifically for demo purposes; I'll talk about it a little later.
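The gate described here can be sketched as: read the results from the scan's SARIF output and fail the pipeline if any finding is critical. The result shape below is simplified, not Mend's exact schema; by CVSS convention, scores of 9.0 and above are "critical".

```typescript
// Simplified SARIF-style result: rule id, SARIF level, and a numeric
// severity score (CVSS-like, 0..10). Not Mend's actual schema.
interface ScanResult {
  ruleId: string;
  level: string;
  severityScore: number;
}

// Break the pipeline when any finding meets or exceeds the critical
// threshold (CVSS >= 9.0 by convention).
function shouldBreakPipeline(results: ScanResult[], threshold = 9.0): boolean {
  return results.some((r) => r.severityScore >= threshold);
}
```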

Then I'm going to commit the code and push it, and it'll be applied to my blueprint. As soon as I push this code, the next step, if you were following along with what Kyle did, is to set it in the catalog. So we jump over to our space catalog and make it available for developers to use. Once again we give it a version; it wasn't in the catalog before, and now it is, at version 0.0.2.

So now that we've created this Mend workflow blueprint, we're going to apply it to an existing project. In this case, I'm applying it to a very well-known vulnerable application: you may have heard of Juice Shop, which is popular in the OWASP community. It's a vulnerable TypeScript application used to teach developers things not to do, so it's riddled with vulnerabilities. What we're going to do is just apply this blueprint.

To apply the blueprint, we first go to space blueprints, select the Mend scan workflow, and hit next; it shows us what the source code changes will be, and we apply it. Remember, this only runs when we do a pull request to main, so now I actually need to make code changes. Here we've set up a normal development lifecycle: you're in the development branch, and you want to make a code change and push it to main for a release.

So we'll create this pull request, and if you've been following along, you know what should happen next: our workflow should run, performing our Mend security scan. As Harry pointed out, this was really easy to do for one project or one application; now think about the power of being able to do it for thousands of applications all at once and apply your security policy.

Here we are running the Mend scan, because we've done a pull request to main. We'll zoom in a little, and you'll see that what it's actually doing is looking at all of your third-party libraries and the open source vulnerabilities associated with them, and letting you know if there's any licensing risk or known vulnerabilities. This runs pretty quickly, and down at the bottom you see it generates that report and then comes back and actually fails, because we had a critical vulnerability.

Because we failed, we should go take a look at the report: there's definitely one critical vulnerability, but what about the other ten? Do we need to fix all of those? We can review the report and see the suggestions for how to fix these different vulnerabilities. Keep this first one in mind: notice that it says there's no fix. That's pretty typical; sometimes there may not be a fix. The next one is another common case: we just upgrade to another version.

So when you think about this, we can either upgrade to a new version, or there may not be a fix, and if there's no fix, that means there's a massive architectural change to make, because we need to replace that library. This brings up a best practice: I don't recommend breaking the build on "critical" based on CVSS scoring alone, because there are a lot of other factors to take into consideration. Is this vulnerability actually exploitable? What's the EPSS score? Are there public exploits available? How much time is it going to take to fix this vulnerability, versus applying a blueprint like this across the whole organization and breaking pipelines?
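Sam's point about not breaking on CVSS alone can be sketched as a policy that also weighs exploitability and fixability. The field names and thresholds below are illustrative, not Mend's API or an endorsed cutoff.

```typescript
// Illustrative triage policy: only break the build when a finding is
// critical AND likely to be exploited AND actually fixable by an
// upgrade. Thresholds are assumptions for the sketch.
interface Finding {
  cvss: number;        // CVSS base score, 0..10
  epss: number;        // EPSS probability of exploitation, 0..1
  fixVersion?: string; // undefined when no upgrade exists
}

function breaksBuild(f: Finding): boolean {
  const critical = f.cvss >= 9.0;
  const likelyExploited = f.epss >= 0.1;
  const fixable = f.fixVersion !== undefined;
  return critical && likelyExploited && fixable;
}
```

A finding with no available fix would instead be routed to the team as an architectural work item rather than blocking every pipeline run.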

Having that security scan so easily applied is fantastic. With that, that's the end of the demo, so I'll invite Harry back up to close this out. Thank you.

Thank you. So there you saw just one example of how you can use project blueprints to ensure that your teams are building the way you want them to on AWS. These features are available today as part of the Enterprise tier. I invite you to go to CodeCatalyst on AWS and give them a try. If you have any questions, we'll be at the side of the stage. Hope to see you soon, and have a good rest of the show. Thank you.
