AWS infrastructure as code: A year in review

Tatiana Cook: I'm Tatiana Cook. I'm a Principal Product Manager with the AWS Infrastructure as Code team. I've been with AWS for over five years and I have the pleasure of working with some of our great developers like James to figure out what products and features we bring to our Infrastructure as Code and developer tools.

James Hood: Hi, my name is James Hood. I'm a Principal Engineer at AWS. I've been with Amazon for about 14 years, and most of that time has been spent in AWS, but I also worked for about five years in retail fulfillment software. So I've been both an AWS customer and a member of the AWS service teams building the actual AWS services. Really excited to be here.

Tatiana: So now that we've introduced ourselves, we were curious to learn a little bit about the folks attending the session today. So I'm gonna ask for some hands up from everyone who's still awake; hopefully that's most of you, it's very early in this presentation. Hands up if you are a leader or lead a team at an organization that's all in on infrastructure as code, where it's core to your processes. Oh, awesome. And how many folks attending here are leading an effort to move more of your team over to infrastructure as code? You're thinking, how can I get more folks using IaC as part of my processes? Ok, awesome. And then last question: how many of you are hands on with infrastructure as code, you're builders? Awesome. Love to see all of those hands up.

Um, great. And to give you a little bit of a plan for what we're going to be talking about today, we have three big themes we're going to hit on. The first one is fundamentals: investments that we're making at AWS in the foundations of infrastructure as code, covering a lot of the features we've launched over the past year. Then we'll take some time to talk about how infrastructure as code transformation can happen at your organization: what are some of the features we have that can make it faster to move folks over to IaC and help your teams get the most out of infrastructure as code once you've adopted it. Finally, we'll take a little bit of time to think about the year ahead and share some of the ways that we're looking at the future of infrastructure as code and ways we expect it to change in the year 2024. So with that, let's get started.

James: Great. So first, let's do a quick recap of the AWS IaC portfolio. We like to think of our portfolio in layers, and at the base foundational layer, we have the AWS CloudFormation resource registry. This is the layer that takes the broad services and deep functionality of AWS and makes them available to IaC tooling as resources that can be provisioned. On top of that, we have the core AWS CloudFormation product. This orchestrates provisioning operations across resources and provides managed experiences like drift detection, change sets, stack sets, etc. You interact with this layer using YAML- or JSON-based templates. Included as a separate feature of CloudFormation is the more recently launched Hooks feature, which is our proactive compliance mechanism, and we'll talk a little bit more about that later.

On top of the core CloudFormation layer, we have a number of abstractions and integrations. We have the AWS Cloud Development Kit, or AWS CDK, offering abstractions across AWS. We have the AWS Serverless Application Model, or AWS SAM, which offers abstractions specific to serverless applications, as well as AWS Copilot and, more recently, AWS Application Composer. On top of that layer, we have AWS Service Catalog for sharing and provisioning CloudFormation templates and other provisioned products, as well as AWS Amplify, which helps accelerate development of mobile and web apps. That resource layer also solves a common industry problem across IaC tools, so we expose that lower layer through a service called AWS Cloud Control API, so that third-party IaC solutions like Terraform, Pulumi, and Ansible can also benefit from it. I'll be talking more about that in just a bit.

So in 2023 we did a lot of work on what we call the fundamentals. And while we did launch many customer-facing features, which both Tatiana and I are very excited to share with you today, as an engineer, one of the things I love about coming to re:Invent is that it is a technical conference, so I can give you a bit of a behind-the-scenes look at some of the engineering work that we do that may or may not make it directly into the What's New posts you see externally.

So let's dig a little deeper into this resource provider layer. First off, what is it? Simply put, the resource provider layer converts declarative state. If the syntax isn't familiar to you, don't worry; this is just a CloudFormation template fragment that's saying, create an Amazon S3 bucket called test bucket and enable server-side encryption. The job of the resource layer is to take this declared desired state, this desired resource, and translate it into the concrete API calls necessary to realize that resource. In this case, maybe it's a CreateBucket call to S3 and a PutBucketEncryption call.
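For reference, the kind of fragment being described looks roughly like this (a minimal sketch; the bucket name and encryption settings are illustrative):

```yaml
Resources:
  TestBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: test-bucket        # desired state only; no API calls appear in the template
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256   # the resource layer turns this into a PutBucketEncryption call
```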

Now, while this may seem simple or trivial in this example, keep in mind that this layer not only supports creation of resources, it also supports updates. So you can pass a new template which modifies properties of that S3 bucket, and then CloudFormation or your IaC tool will figure out the delta, send those updates through, and make the right update API calls.

Twelve years ago, when CloudFormation first launched, we didn't have this registry solution, and originally the resource provider layer was actually built in and implemented inside the CloudFormation code base. As you can imagine, AWS continues to innovate rapidly on behalf of its customers, in 2022 alone launching over 3,000 significant new features. This creates a treadmill problem, and having the CloudFormation team keep up with all of that is not really a tractable solution.

So the resource registry allows for a federated ownership model, meaning that the service teams like the S3 team or the Lambda team can own their IAC resource definitions and iterate on them alongside their features as they launch.

So why is this important? Well, first, this resource registry is a public service, so it enables third-party support. As of today, we have over 150 third-party resource types available on the registry from over 20 partner resource providers, spanning use cases like application monitoring, databases, incident response, networking, and many more.

Second, again, this is the foundational layer: this resource coverage is the foundation for your IaC tools. We had an enterprise CIO tell us that without CloudFormation support for AWS features or services, they're unable to use those AWS innovations as part of their cloud infrastructure. So having that resource coverage is critical, since IaC is the preeminent way of managing cloud application infrastructure at scale.

Also, this resource model not only standardizes the create, update, and delete provisioning operations; we've also standardized read and list operations. That means resources implemented on this registry model are automatically supported by the managed experience features I was talking about before, like drift detection and resource import. So you get a consistent experience from resources being on this registry.

Also, internally in AWS, we actually have a standard release process for new features and services, and I just wanted you all to know that CloudFormation support is part of that standard release process. So, with a few exceptions, new services need to have CloudFormation support on, or very shortly after, day-one launch, and this has been working really well for newly launched services.

However, we didn't launch CloudFormation with this registry solution. So, like I said, there are a number of pre-existing resources that need to be migrated onto the registry so that they can continue keeping up with this pace of innovation.

So I wanted to give an update on that. When we started 2023, we had about 66% of resources on this registry model when you just looked at raw counts. However, when we actually looked at usage, there were heavy-hitter resources, your Amazon EC2, Amazon S3, AWS Lambda, and AWS IAM resources, common resources that were not yet on this registry model.

So we set an aggressive goal to get to 90% by usage resources migrated onto this registry model by the end of 2023. And I'm very excited to say that we are currently above 85% right now with a high confidence plan to get to 90% or greater by the end of the year. So that's something that, you know, I'm very proud of the team. We've done a lot of hard work on this. And so I actually wanted to share some of the engineering work that went on behind the scenes to make this happen.

Now, this registry model has many technical advantages over the previous embedded implementations of these handlers, for example the strong schema support and these read and list handlers, which are new. But what that also means is that this was not a simple lift-and-shift kind of migration; these are essentially rewrites of the handlers, and they're high-usage handlers, right?

We wanted to do this migration seamlessly under the covers, where customers did not have to think about it or even worry about it at all, so for us it required multiple layers of testing. One layer is that all resource handlers defined in the registry have to conform to a resource handler contract, and we provide automated testing to ensure that resources abide by this contract.

Since this was a migration, we also had a very strong emphasis on backward compatibility testing. We used a number of techniques, but one I wanted to highlight we call API parity testing. The idea there is that if you think of the embedded handlers, the pre-existing handlers, and the new handlers as black boxes, then theoretically, if you pass the same inputs, they should make identical API calls at the other end.

So we built test harnesses: we used both manual and automated techniques to generate a wide variety of inputs, ran them through the embedded handlers, and recorded the API calls that were made on the other side. Then we replayed those scenarios through the new handlers and verified that the API calls on the other side were identical.

So this gave us a lot of confidence that our changes were not going to break existing customers and that these changes would be seamless.

Now, also recall from earlier that when a customer specifies their desired state, it may result in multiple API calls being made to realize it, like in the example of CreateBucket and PutBucketEncryption. So we developed test harnesses that employ chaos engineering techniques, simulating timeouts and partial-failure scenarios, to ensure that even in those cases the new handlers behaved exactly like the old handlers, or better where we could improve upon them.

These are interesting engineering challenges, and the team is very excited not only about this migration progress; backward compatibility for a managed service is incredibly important. In building these test harnesses, we can verify not only that backward compatibility is maintained for this one-time migration, but also, moving forward, any time we change the resources we can run these tests and verify that any new changes to these handlers are backward compatible.

Now, I'm sharing this with you partly because it's re:Invent and it's fun to come and talk about the engineering work we do behind the scenes. But outside of re:Invent, in your regular usage, I also want to highlight that this is the beauty of, and why at AWS we're so passionate about, managed services. You have a whole team of dedicated engineers working behind the scenes on something that honestly you shouldn't really have to think about on a day-to-day basis. That's really our goal: that you can just assume these resources are going to be provisioned correctly at scale and, even in these strange partial-error scenarios, will operate correctly for you.

Now, as of October 2023, millions of customers manage application resources with AWS CloudFormation. And while we on the team are proud to support those customers, we recognize that not all AWS customers choose CloudFormation as their IaC tool. However, we want all AWS customers to benefit from this engineering work we've done. We realized that this resource layer, keeping up with the feature support of AWS and managing this quality, is not just a CloudFormation problem; it's a generic IaC tooling challenge.

So we created AWS Cloud Control API. It's an external AWS service; you can use it through the AWS CLI or SDKs. It exposes these lower-level resource provisioning operations, the create, update, delete, read, and list APIs, across all of the resource registry types. We've made them available as an external service so that third-party IaC solutions, your Terraform, your Pulumi, your Ansible, can also benefit from this. And this is not just a theoretical benefit.

For example, HashiCorp, who owns Terraform, is an AWS partner, and they have had an AWS Cloud Control provider in tech preview. They just posted a blog yesterday announcing that that provider is going to be generally available in early 2024, so definitely watch for that.

So again, I love coming here and talking about all the engineering challenges, and I can geek out about this stuff for hours. But we did have a lot of customer-facing announcements this year as well, so I'm going to pass it off to Tatiana to talk more about that.

Tatiana: When we were thinking about the different kinds of announcements and features we had this year, we decided to structure this around a framework of how you are interacting with infrastructure as code: what are the different use cases and problems you're thinking through?

So the first one is getting started: when you're building a new application, how do you figure out what the right infrastructure as code pattern is? Then you get into authoring: maybe you're hands on keyboard, writing YAML or HCL. You go through a bunch of dev-test cycles, you get ready to deploy, and then you think about scaling up that pattern and deploying it in all of the places where you want your infrastructure to run.

And as James and I were preparing for this, we decided, let's take a look at all of these different feature launches that we've had this year so we can talk about each and every one with the crowd today. And what quickly became clear is that we have been very busy and if we wanted to talk about all of these launches with you, we'd need a lot more time than the hour we have together today.

So what we're going to do is go through each of these themes and highlight one or two of the launches that stand out as interesting changes to infrastructure as code features, ones we think will be worth checking out if you didn't manage to read through all of the What's New posts over the course of the year.

And the first update we wanted to highlight was an update to Application Composer. Application Composer is a visual builder for infrastructure as code that launched at re:Invent last year. Initially, Application Composer supported a set of enhanced components focused on building serverless applications with an interactive drag-and-drop visual canvas. But what we heard from many of our customers is that the serverless components were really just part of a larger application built with all different kinds of AWS services.

So late this summer, we launched support for all CloudFormation resources as part of this visual building canvas. So that means that you can look for whatever CloudFormation resource you want to build with and drag and drop that onto your Application Composer canvas. The benefit of this is it can help you learn about a new application architecture, understand more about how resources are related or visualize existing CloudFormation.

And we're really excited that as of this week, we have also integrated our visual canvas with Step Functions Workflow Studio. So for anyone who's using Step Functions Workflow Studio, you can actually use its visual building capabilities for some of that workflow logic within the same canvas experience you have for Application Composer, as is shown here. We're really excited about this, and you'll continue to see more exciting launches for Application Composer in the days and weeks ahead.

And we think that a lot of this benefit is really to builders who are thinking about getting started. So whether it's someone who's new on your team and trying to understand what a resource relationship looks like or when you're getting to know a new service, this can be a really helpful tool.

So let's imagine you've gotten started in App Composer and now you're ready to get into some of this CloudFormation YAML. I'm guessing based on the hands up earlier, a lot of you do this on a day to day basis and we love to hear from you.

So, something that we have available is a public GitHub repository where we host a CloudFormation language discussion. And this year we launched two improvements to the CloudFormation language that were based on two of the most requested, hottest topics there: Fn::FindInMap enhancements and looping functionality in CloudFormation templates.
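As a rough illustration of the looping functionality, a template using the language extensions transform can stamp out one resource per item in a list; this is a minimal sketch with placeholder topic names:

```yaml
Transform: AWS::LanguageExtensions
Resources:
  'Fn::ForEach::Topics':
    - TopicName                      # loop variable
    - [Success, Failure, Timeout]    # collection to iterate over (placeholder values)
    - 'SnsTopic${TopicName}':        # logical ID pattern, one resource per value
        Type: AWS::SNS::Topic
        Properties:
          TopicName: !Ref TopicName
```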

And part of what we're curious to hear from you about is what should come next in the year ahead. Where should we keep improving the CloudFormation language experience? We encourage you to check that out and contribute there. And, as it is the winter of 2023, we couldn't talk about authoring without talking about generative AI.

We're really excited that this year we also announced Amazon CodeWhisperer support for infrastructure as code, and that support includes CloudFormation YAML, CDK support for TypeScript and Python, as well as Terraform HCL. All of these can integrate with CodeGuru for security scanning. So you can move a lot faster by using CodeWhisperer to get started or iterate on templates and infrastructure as code.

Ok. Again, thinking of some of our serverless developers, or folks who are building event-driven applications: maybe you got started with Application Composer, you iterated a little bit, and at this point you're starting to get into your dev-test cycle. We have a new capability called the AWS Integrated Application Test Kit, now in preview, that's focused on making test-driven development in the cloud a lot easier by helping set up some of the resources that you need.

This can include helping locate cloud-based resources, creating a test harness, and figuring out how your timeout logic should be set by default for some of these asynchronous interactions. It currently supports EventBridge and is in preview with Python, but we're curious to hear more from you about what kind of integrated application testing functionality would be helpful, so that we can keep building on the Test Kit over the course of the year ahead.

And at this point, I've kind of gotten a rough application, I've done some dev test stuff and I will hand things over to James to take us through the deployment phase.

James: Sure. So during the deployment phase, there are some features we launched quite recently that I wanted to draw to your attention. One of those is a simplification in the way that resource import works. Prior to this launch, resource import was always a separate, out-of-band operation.

For example, internally in Amazon, CloudFormation is the de facto standard infrastructure deployment tool, and we also use full continuous deployment; we do a lot of CI/CD with CloudFormation. So resource imports had to happen out of band of our regular deployment pipelines, which is not ideal in a CI/CD environment.

So we launched this feature, which is a new flag when you create a change set, called import existing resources. For supported resources with custom names in your template, it means that import operations can happen as part of a regular update to your stack as well.

You can see an example here where there is a table being imported as well as another table being modified, all as part of the same deployment. This came from customer feedback as well as our own experience using CloudFormation in a CD environment.
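A sketch of what such a template might look like (the table names and properties here are made up; the key point is that both resources have custom names and the change set is created with the import existing resources option turned on):

```yaml
# OrdersTable already exists in the account and gets imported; SessionsTable is
# part of the stack and gets updated, all in one change set / deployment.
Resources:
  OrdersTable:
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain             # common safeguard on imported resources
    Properties:
      TableName: orders                # must match the existing table's name
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
  SessionsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: sessions
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
```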

Another quality-of-life improvement that I wanted to share with you all: when you have a large stack, or a large stack deployment that fails, we've received feedback from customers that searching through stack events and looking for that first failure, which is usually the root cause or likely root-cause failure, can be time-consuming and difficult.

So on the console, we've added this detect root cause button; you simply click it and it takes you right to the likely root cause. This is something we just launched earlier this month. We've already received quite a bit of positive feedback on it, we're hoping to raise more awareness about it, and we're hoping it saves you time.

Finally, after the deploy phase, the next phase is scaling up usage of CloudFormation or IaC. For CloudFormation specifically, one of our core features for scaling up is CloudFormation StackSets, which extends the capability of stacks, enabling create, update, and delete of stacks across multiple accounts and AWS Regions in a single operation.

We've launched many features here that are, again, quality-of-life improvements based on customer feedback, for example being able to track Regional deployments via the stack set describe APIs, or access stack instance drift within the stack set.

Also, we had a failure tolerance feature where, for example, you could set a failure tolerance of 10%. But in that case, StackSets would never deploy to more than 10% of your stack instances at a time, because it never wanted to risk going over that 10% value.

We've introduced a concurrency mode where you can specify that the failure tolerance should be treated as a soft failure tolerance rather than a strict one, so StackSets can be more aggressive and deploy faster across your stack instances.
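A sketch of the operation preferences involved; this structure is passed on a stack set operation (shown here as YAML for readability, and the percentages are just example values):

```yaml
OperationPreferences:
  RegionConcurrencyType: PARALLEL
  FailureTolerancePercentage: 10
  MaxConcurrentPercentage: 50
  ConcurrencyMode: SOFT_FAILURE_TOLERANCE   # vs. STRICT_FAILURE_TOLERANCE, the original behavior
```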

Similarly, one rough edge with using stack sets previously was that if there was a suspended account in your AWS organization, for example, it could cause stack set deployments to fail. And now we skip suspended accounts within the stack set.

So again, all of this comes from feedback from customers and we're always working to make your experience better using our products.

Tatiana: So that covers a lot of the fundamentals that we worked on this year, and now we're excited to talk to you about what IaC transformation looks like today. One of the things that we think about a lot is how infrastructure as code fits into the different ways that organizations operate.

Some of you may work solo, but I'm guessing a lot of you are part of a team where infrastructure as code is part of a shared process, potentially between folks who have different skill sets.

So we're gonna ask for a show of hands again. We've thought about some of these different models, and one thing that's quite common is this idea of centralized provisioning, where you have a central team, and all of the application developers come with their application code, maybe they use tickets, and they request infrastructure to be provisioned.

Show of hands: who here feels like this is what you're doing?

Ok, I got a few um... Something that is a hot topic right now is this idea of platform-enabled golden paths, um, or kind of this internal developer platform model where you actually have a dedicated platform team and they're creating patterns for infrastructure that application developers can then view and select and self-serve.

Is this anybody here? This is common with Kubernetes; we hear a lot about this from Kubernetes folks, ok. Um, in some cases, where teams have diverse infrastructure and a single platform doesn't make sense, they'll actually have an embedded DevOps person, or a set of people, who sit with each team and work side by side with developers, while the central team thinks about setting guardrails and boundaries. Is this anyone? Ok, got a few.

And then the last model is this idea of decentralized DevOps, and this is actually more representative of how we operate at AWS. Here the idea is that developers define both their infrastructure and application code together, and they have responsibility for making sure that works, while a central platform team sets up guardrails and boundaries. So anybody for that? Um, great.

And part of what's interesting for us is that, as you saw in the room here, there really is a mix of models, and each of these can be really effective for your organization. But you're going to want to think differently about how you set your processes up and help your application teams, the folks writing application code, effectively interface with whatever team is responsible for either defining infrastructure patterns or setting guardrails.

We think it's a great time to be moving to infrastructure as code, and we've launched several features this year, regardless of your operational model, that we think will help make that transition faster. Among the reasons we think it's an exciting year: it's easier than ever to get started with infrastructure as code, including for already-deployed infrastructure.

We have features that make it faster for you to build and can really help unlock developer productivity and we're thinking of ways to make it safer for you to deploy infrastructure to whatever your target environment is.

So in terms of getting started with infrastructure as code: you are all here listening or watching because you think IaC is awesome and you want to use it a lot, but you may not be part of an organization where infrastructure as code has always been central to your processes. It may not have been the way you actually got started.

So you might have provisioned AWS resources that are not currently associated with CloudFormation or CDK templates. In some cases, you might also want to use the management console or have team members who prefer to interact with your infrastructure there and they're clicking around and mutating things and creating resources that aren't part of a template.

Historically, this has been kind of tricky for teams to handle, particularly folks who are adopting infrastructure as code now but have a big deployed estate. So we are excited, as part of this session, to be pre-announcing a new feature in CloudFormation that will generate CloudFormation templates for existing resources. And yeah, let's get some applause for that one. We're really excited, and we're really looking forward to bringing this to all of your teams.

What this will allow you to do is actually scan your AWS environment for existing resources. You'll be able to see what's managed and what's not managed by CloudFormation. You'll have features to search and filter, AWS will suggest some resource relationships, and once you've selected the unmanaged resources, you will be able to generate a CloudFormation template.

In addition, if you're using the CDK, we have a new capability in the CDK, our new migrate command, which, using the CDK CLI, will generate CDK L1 code. We're really excited because we think this will make your transition to the CDK much easier: you take existing resources, you move them to CloudFormation, then seamlessly generate CDK L1 code for them.

As an example here, as we were working with this feature and prepping for re:Invent, we might have a stack we want to generate, called our re:Invent 2023 stack, in TypeScript. We run the cdk migrate CLI command and, moving over to VS Code, you can view the generated L1 CDK code.

Um so we really think that this is gonna help teams move faster to bring things over to infrastructure as code.

James: Great. So many of you here are leaders in your orgs, either explicit leaders or leading by example, driving your organization's IaC transformation journey. As part of guiding other teams who are potentially not used to using IaC, you may end up running into common questions, or common patterns and habits that need to be changed. We wanted to cover a few of them today.

One of the most common questions is around resource provisioning time, which may look different in an IaC environment. You may have someone who's used to using the console to provision, for example, an IAM role, and they say, hey, I don't understand: when I go to the console and I click create role, it returns almost instantly, yet when I use IaC it can take a little bit longer to provision this IAM role. Why is that?

So I did want to take a minute and make sure that you have the answer too, so that you can pass it along to those who ask you. In short, the answer is eventual consistency. For example, with IAM roles, even though the CreateRole or UpdateRole API call on average returns in sub-second latency, there's additional asynchronous work that happens behind the scenes. And I like to break this up into two key milestones that a resource goes through.

The first one I call configuration complete, and that's where, when you call the GetRole API, it's going to reflect back the change that you wrote. This generally happens fairly quickly after you've made the write call, but there can be a little bit of a delay before that happens.

The next milestone: although the GetRole API may be reflecting the change that you made, that role may not be ready to actually be used, which for a role means assumed, until a little bit later. So there are these two milestones: configuration complete, and then the resource being ready for use.

All IaC tools have to deal with this constraint, and there are different approaches. Some IaC tools take an optimistic approach, where as soon as a resource hits that configuration-complete step, the IaC tool moves on. There are some edge cases where, later in the deployment, some dependent resource needs that previous resource to be ready for use, in which case there's error handling and retry logic to ensure that the deployment succeeds.

CloudFormation, for example, takes a more conservative approach, waiting for the resource to be fully ready for use before proceeding. There are pros and cons to each, and the key difference is: when the deployment completes, what is the expectation at that point?

With the optimistic approach, the expectation is that everything has been configured, but possibly some resources may not be immediately ready for use. With the more conservative approach, by the end of the deployment, all resources are ready for use.
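A small sketch of why that guarantee matters, using a hypothetical role and function: under the conservative approach, the deployment isn't reported complete until the role is actually assumable, not just visible via GetRole, so the function that depends on it can be created and invoked right away.

```yaml
Resources:
  AppRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
  AppFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: index.handler
      Role: !GetAtt AppRole.Arn        # implicit dependency on the role being ready for use
      Code:
        ZipFile: |
          def handler(event, context):
              return "ok"
```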

Thinking back to the spectrum that Tatiana was talking through: in that centralized provisioning model, for example, the optimistic approach may be no problem, because you have a team that's doing the provisioning operation, then some sort of human interaction or notification step to notify the application team, and by the time they go to start using the infrastructure, it's all ready for use.

Internally in Amazon, again, we use the CDK, so when deployments complete, we immediately have integration tests kicking off in an automated fashion. The conservative approach works well in that case, because we don't want integration tests to fail simply because of some eventual-consistency timing window.

So I wanted to communicate that so you can also educate others on it. And we have received feedback, for CloudFormation specifically, that customers would like more visibility into this. So there is work going on to make configuration complete and ready for use more obvious, and even to optionally give customers a chance to state what their preference is.

In addition to education, the transition to IaC can also involve changing habits, especially for teams that are used to using the console to manage infrastructure. If you migrate them to IaC but they still go to the console to make updates, that's going to create resource drift, and it's a habit that needs to be changed.

We've had a lot of success with converting those teams over to using standardized source control management solutions, the most popular of which is GitHub. And this tends to make it easier for those teams to adopt new patterns and new habits.

Now, one thing that we have heard from customers for a while is that there's a need to simplify the process of deploying a stack. If you're here, you've probably deployed a CloudFormation stack at some point, and you know you need to first upload a template to S3 and then kick off a deployment; there are automation steps that you can add in for this. But we have heard feedback that customers really like their Git workflow and they don't really want to context-switch out of that Git workflow.

They may use popular solutions like GitHub, GitLab, or Bitbucket, and they would really like first-party support for this Git-style workflow. So just this week, we announced a feature to sync CloudFormation stacks from remote Git providers, and we're very excited about it.

This allows you to trigger CloudFormation operations from Git repository actions. So you can now push a commit to GitHub, GitHub Enterprise, Bitbucket, or GitLab and trigger a CloudFormation stack operation. This feature works by leveraging AWS CodeStar connections to link your Git repository to CloudFormation, simplifying your deployment process.
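With this feature, the stack is described by a small deployment file kept in the repository alongside the template. A minimal sketch might look roughly like this (the file name, parameter names, and values are hypothetical; check the Git sync documentation for the exact schema):

```yaml
# deployments/prod-stack.yaml
template-file-path: ./template.yaml   # template in the same repository
parameters:
  EnvironmentName: prod
tags:
  Team: devtools
```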

Also, one thing I'm really excited about: this CodeStar connections technology is bidirectional. So not only can we go from GitHub to CloudFormation, we can also go from CloudFormation back to GitHub. You'll see in this example that creating a new stack actually generates a PR, a pull request, from CloudFormation into the GitHub repository, and we see a lot of potential for using this technology in the future as well. So make sure you watch for that.

Transitioning teams to tools like Git is a great way to help them onboard to IaC, and then you can take them even further by giving them access to tools that help them build even faster.

One common trend that we've seen over the years is that abstractions really enable organizational efficiencies and help developers move more quickly. We see improved velocity because teams are able to reuse shared patterns across applications, and you can share patterns across your organization.

It also improves developer experience because, rather than having to work at the lowest layers and see all of the resources in your application at once, abstraction allows you to work at higher levels, which can be more appropriate when you're putting together your initial application or even just trying to understand an application to begin with.

Also, there's improved safety when you have this set of potentially golden path components that have been tried and tested throughout your organization and just work, they help people to get started faster.

So, our solutions related to abstractions: first, we have AWS SAM, which is a transform simplifying the development of serverless applications. It has nine curated abstractions...

This year we launched an abstraction for GraphQL resources as well as SAM connector support for AppSync. SAM connectors simplify permissions by offering a connector abstraction between different resources that manages the right IAM and resource permissions for you.
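As a sketch of the connector abstraction (resource names here are made up): the connector below grants the function read and write access to the table without hand-writing IAM policies.

```yaml
Transform: AWS::Serverless-2016-10-31
Resources:
  OrdersFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      CodeUri: ./src
    Connectors:
      OrdersTableConnector:          # SAM generates the policies for this connection
        Properties:
          Destination:
            Id: OrdersTable
          Permissions:
            - Read
            - Write
  OrdersTable:
    Type: AWS::Serverless::SimpleTable
```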

Also, beyond SAM, we have the AWS Cloud Development Kit, or AWS CDK, which offers over 850 L2 constructs that reflect top patterns supported by AWS services and the community. You can see some stats on the number of releases in 2022.

One thing we're really proud of with the CDK is that it is an open source project with a healthy community. We've had over 1,300 community contributions this year, and a lot of the design work for the CDK is done out in the open through the RFC process, right on GitHub. We've gotten feedback from customers that they've really benefited from using the CDK as well.

One example is the PGA Tour. Any golf fans here, anyone, anyone? Alright. So the PGA Tour is the world's premier membership organization for touring professional golfers. It co-sanctions tournaments on the PGA Tour along with several other developmental, senior and international tournament series.

They developed a new mobile app and the PGATour.com website. They were hoping to give their fans an immersive, enhanced, and personalized experience with features like near real-time leaderboards, shot-by-shot data, video highlights, sports news, statistics, and 3D shot tracking.

So the Tour built a microservices based architecture with the developer friendly CDK. It used a mix of higher level constructs for critical services like AWS Lambda and Amazon Elastic Container Service clusters and then lower level constructs for say, Amazon DynamoDB and AWS AppSync.

So the CDK really provided them with flexibility and agility, and it helped the Tour manage constant changes in the appearance of the mobile app depending on which tournament was happening. They actually use the CDK to stand up a parallel environment for an upcoming tournament, which allows them to customize functions and features needed by that tournament without disrupting the current tournament.

When the current tournament ends, then they can flip to the new stack and tear down the old stack. It also greatly increased their release velocity. They went from a biweekly release process that involved a three hour maintenance window at night to multiple releases per day, with the deployment time taking about seven minutes.

This increase in agility also allowed them to get fixes out faster. In one example, they were able to go from identifying a bug, to coding up a fix, to going through user acceptance testing and pushing it out to production during a live tournament, in just 42 minutes, something that was previously measured in hours or even days.

Customer stories like this are really inspiring to us. And one thing that gives customers the confidence to deploy quickly like this is the various safety features that we have built into our IaC tools.

Tatiana: Great, and this is the theme: we want folks to move fast, but we want them to be really safe when they're doing so. One thing that we've been thinking about and talking about is using infrastructure as code to shift the way that you're operating from being really reactive, with maybe detective controls in a deployed environment, to being a little bit more proactive.

Because when your team is using infrastructure as code, you have a picture of the future right in front of you: you have your intended state and resource configurations that you can use to inspect and understand.

Another benefit here, when we think about your experience as developers interacting with infrastructure as code, is that proactive controls will give you feedback signals sooner, shortening your dev-test loops and catching issues earlier. And with CloudFormation, we're thinking about ways we can build in safety and these proactive controls by default.

So today, I wanted to highlight one of the features we have within CloudFormation, called CloudFormation Hooks. What Hooks allow you to do is configure custom rules in your account that will run against CloudFormation deployments, and they give you the ability to set logic around whether you want to emit warnings or stop a deployment if a rule fails.

Specifically, to use Hooks, you get started by building and testing a hook locally, then submitting that hook to the CloudFormation registry and configuring where it should run within your account. Then, once your infrastructure as code template is created and you get ready to deploy a stack, that hook will run on a per-resource basis, running the custom logic that you've provided, and the hook will either succeed or fail.

Based on what you've configured, if it fails, it can either block that deployment or send a warning signal back. This is really great because customers tell us that it allows them to catch issues earlier so they can resolve things faster, and it gives them guardrails.

So their developers can be a little bit more independent as they interact with infrastructure as code. A classic example would be writing some hook rules for how you want your S3 buckets to be configured in your organization, for example blocking public access, so that if a template doesn't have public access blocked, you would either get a warning or a deployment failure.
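For instance, a hook like that would effectively require templates to declare buckets along these lines (a sketch only; the hook itself is authored separately and registered as its own type):

```yaml
Resources:
  LogsBucket:
    Type: AWS::S3::Bucket
    Properties:
      PublicAccessBlockConfiguration:   # the configuration such a hook would check for
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
```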

New things this year that we're excited about for Hooks include a target filtering feature that allows you to set filtering conditions for when hooks should be invoked, some quality-of-life improvements around removing authoring dependencies like Docker, and we're now at commercial Region parity with where CloudFormation is available.

So if you're running CloudFormation in a commercial Region, you can take advantage of CloudFormation Hooks. And again, we're really excited to see the ways that customers are taking advantage of functionality like this. Specifically, the team at GoDaddy has set up a platform that they call Stack Safeguard that's integrated with CloudFormation Hooks, and this allows them to define rules for how they want their infrastructure to be configured.

Whenever developers go to provision infrastructure through their platform, hooks will run, check resource configurations, and can block unsafe deployments. If you're curious to learn more, we're really excited to have Damien from the GoDaddy team doing an in-depth presentation on how the Stack Safeguard solution came together.

So I encourage you to check out that session. It's happening tomorrow at noon, DOP29, and Damien's awesome. We've been really excited to work with him and his team to understand more about how they're using this functionality.

So wrapping this up, we think that 2024 is a good year to do more with infrastructure as code. It's easier to get started by moving provisioned resources to CloudFormation or CDK, it's faster to build with a lot of the abstraction solutions that we have, and it's safe to deploy, and we're going to support more and more safety mechanisms for you to put the right guardrails in place.

And we wanted to close out today by looking forward a little bit, sharing some of the things we're excited about and ways that we think infrastructure as code usage will shift in the year ahead.

James: Yeah. So look, when you think about IaC today, it's likely you think about hand-editing YAML, JSON, or HCL files, or writing code with programming models like the CDK or Terraform. But over the last few years, we've seen a continual evolution in authoring, not only in the ways to author but in the higher layers of abstraction, and we can anticipate what this evolution will look like in the future if we look at the progression so far.

We started from authoring infrastructure as code files directly. In 2019, we introduced the CDK for authoring using programming languages. In 2022, we introduced AWS Application Composer with an intuitive visual builder for learning and getting started. And then, right here at re:Invent, we've introduced generative AI features for IaC authoring.

So with all these different modes and interfaces that you can use to interact with your IaC, moving into the future we see seamless movement between interfaces, and we think about this as multimodal interaction.

You're working with AWS resources, but you might have a few different windows or interfaces that work well depending on where you are in your infrastructure as code process. You might want to get started with the visual canvas, but then move into a text-based interface to really understand what configurations look like, and use features like the CodeWhisperer integration to provide natural language inputs.

And we're excited because we've looked around and we think we launched a lot of stuff this year that puts us in a really good position to make that experience seamless for you. With CloudFormation support for IaC generation, you can get all of your resources onboarded to IaC.

You can start automating everything you can, using integrations like Git sync with GitHub, and you can implement safety guardrails with CloudFormation Hooks. So we see these foundations in place, allowing your team to start moving faster and faster and thinking about those different interaction modes: visual, text-based, and natural language interaction with infrastructure as code.

So being here at re:Invent, you've already been inundated with many, many new launches and features, and there's a lot of change. There's a lot of talk around generative AI and potential disruption due to that.

One thing that I always like to bring up when we have discussions as a leadership team actually comes from a quote from Jeff Bezos. Way back at re:Invent in 2012, he had a fireside chat with Werner Vogels where he said that he frequently gets asked the question, what's going to change in the next 10 years?

And while it's an interesting question, he said, he almost never gets asked the question, what's not going to change in the next 10 years? He submitted that that is actually the more important of the two questions, because you can significantly invest and build a business around things that are stable over time.

He even made jokes about how it was impossible to imagine customers saying, I love the Amazon retail website, I just wish the prices were a little higher, or, I wish the items were delivered a little more slowly, right? You literally cannot imagine somebody saying that. And so you can put heavy investment into those things.

So in these times of rapid change and potential disruption, that quote pops up into my head and I think about what is not going to change. In IaC specifically, one of the things we see being stable, and that we will continue to invest heavily in, is that foundational resource layer I was talking about: ensuring that we have the coverage, the quality, and the consistency available as the foundation for all of the IaC tooling.

Also, IaC is, and we believe will continue to be, the preeminent way to scale usage of cloud infrastructure. So having that scalability, having services that can run at scale, will continue to be very important in IaC. And in terms of safety, if anything, deployment safety will continue to become more important as there are more authoring features.

If authoring is accelerated with abstractions, and generative AI is also contributing to faster authoring, then features like rollback and Hooks will continue to be focus areas.

Tatiana: And there's one last thing we don't think is gonna change, and that's that we really, really like hearing from all of you about what's important and how you're building with infrastructure as code.

So we wanted to wrap the presentation up by inviting you all to continue the conversation with us. We'd love to hear what you think about some of the launches we've had this past week, including our Application Composer integration with Step Functions.

You'll also see some links to App Composer from the Lambda console experience, our pre-announce of CloudFormation template generation and CDK migrate, the ability to sync stacks from Git providers, and CodeWhisperer support for IaC.

In terms of how to find us, we are available in a bunch of different channels. One thing that we're excited about is a new series that we're streaming on YouTube called CDK Live, where we cover the CDK in depth but also other infrastructure as code solutions. So check that out.

We often have CDK contributors or customers joining us there to talk more about their usage of the CDK. You can also reach out to us on LinkedIn and find us on our public roadmaps for the CDK, the CloudFormation language, SAM, and others.

And then the last thing that we're excited about is that we're launching an IaC Builder Slack community, for anyone who's passionate about infrastructure as code, has colleagues who want to understand more about what's going on, or heard some ideas here that they want to chat with us about and share.

We really encourage you to sign up here. James and I will be there along with a bunch of our other team members chatting with all of you and learning about how you'd like to build.

So in closing, thank you all so much for taking the time. We're really proud and excited about what the teams delivered this year and we have a lot to look forward to in the year ahead. Great. Thank you so much for being here. Really appreciate it.
