What’s new with AWS governance and compliance

Hi, everyone, and welcome to our session. Really glad to have you here, and I hope you're having a great first day at re:Invent.

I want to start off with a quick story. This is my six-year-old daughter Swara, and she loves riding her scooter. She loves doing all kinds of tricks: racing her friends, going down slopes really fast and then suddenly braking, going around the cul-de-sac, and stuff like that.

Right. I keep trying to get her to wear a helmet and knee pads, but she hates it. She's like, "Dad, just let me go play with my friends. Why are you slowing me down? You're wasting my time." And I'm like, "OK."

And so this one time I let her go without wearing any of the protective gear. Sure enough, she fell down, bruised her knee, and lost out on two whole weeks of riding time in summer. That's when she realized that all of this gear isn't slowing her down or getting in her way. It's actually helping her have more fun, without worrying about injuries, falling down, and things like that.

The reason I share this with you today is that cloud governance is just like that. When you do it right, it helps your teams and your developers actually do more and actually innovate faster, not slower and it gives them the peace of mind to do so without having to worry about security vulnerabilities, misconfigurations, data breaches and things like that because you have set the right boundaries for them to be safe and innovate with confidence.

I'm Sid, I lead the product for AWS Governance and I'm joined by Andres who leads our solution architect team for Cloud Ops. And in today's session, we are going to talk about what AWS governance and compliance covers as well as a lot of the new features that we have launched this year to make it even easier to use, more scalable and more comprehensive.

So let's get started. Just to level set, the first thing I want to start with is a couple of definitions. Everyone thinks about governance a little bit differently, so I just wanted to lay down our definition. I think of governance as a set of processes, tools, reports, or frameworks to align your AWS usage with your business objectives. And this can cover security, resilience, data integrity, cost control, risk: many different pillars.

Compliance is more focused on meeting regulations, standards or even internal policies that your company needs to adhere to. And when you have a strong governance foundation, it helps you with your compliance because you already have the tools, reports and processes to make that efficient. Right? So these are the definitions we'll use today.

So, with that, here's what to expect. First, we'll meet a team of builders, just like yourselves, who are trying to understand how to govern their AWS environment. And we'll go through three specific stages: how they get started, how they scale up, and how they think about regulatory compliance.

So, Andres and I were talking, and we figured that rather than just talking about one launch after another, we'd look at it through the lens of a team like yourselves: a cloud platform team trying to govern their environment for a technology startup.

We have three characters we'll use today. We have Richard the Cloud Architect, Maria, a Cloud Ops Manager and Nikki who is a Compliance Manager.

So now Richard is a Cloud Architect and he is really responsible for leading the AWS adoption for his company. So what Richard thinks about is what sort of tools do I need? What AWS services I should use and how I should govern our entire environment. So he's sort of the thought leader for the company adopting AWS.

Maria as the Cloud Ops Manager is more worried about the day to day operations, collecting data about all the workloads, keeping them safe, remediating any risks and so on.

And finally, Nikki, who is the Compliance Manager is responsible for making sure all of their workloads and all of their accounts adhere to the regulatory standards that they are subject to.

Now, you may not identify with any of these exactly. But I hope you can see some flavor or some mix and match of the duties that you are responsible for in your companies. Right. So that was sort of the intent of channeling these three team members.

Let's start with Richard. And the question we are going to help Richard with is "How do I get started setting up a well governed environment?" Right.

So when you think about what a well governed environment in AWS should look like you think through three challenges, right?

The first one is, how do I even think about governance in the cloud? As many of you know, the cloud is this wonderful place where teams can create instances or databases in any region of the world within minutes. You deploy and tear down infrastructure with just a few lines of code, and the data centers are not in your physical control, right? So how do you think about governance in such a space? That's problem number one.

Problem number two is how you architect for now and for the future. A lot of our customers start off by experimenting with a handful of accounts, but then quickly grow to hundreds of accounts in a year or two. And what you don't want is to have to throw away what you have already built, or to migrate to a whole new setup when you scale. You want to set yourself up for success and use a foundation that will grow with you as you scale.

And the third one is integrating with existing systems. Now, most of us don't adopt AWS in a vacuum, right? We typically use some other tools, and it's the same with Richard. Richard's team uses Okta for their identity federation. They may use Datadog for monitoring, Snyk for software vulnerability assessment, and so on. And so he expects that when he adopts AWS, it works seamlessly with these existing tools rather than having to throw them away and start over. Right?

So these are the three challenges we help Richard figure out.

So how many of you here know the Shared Responsibility Model? Quick show of hands? Great, most of you. So I'm gonna go quick over the slide, but essentially AWS owns security of the cloud, which is the underlying hardware, host operating systems and so on. And as customers, you own the security in the cloud, which means your workloads, networking, IAM role definitions and so on, right. So this model essentially helps Richard understand where he should focus his energy and where he should focus his governance.

Another quick show of hands. So raise your left hand if you have heard of Control Tower. Alright. And then raise your right hand if you have used Control Tower or are using it right now. Great. Keep your hands up for the rest of the talk. No, I'm kidding. I'm kidding. Thank you. Thank you.

Um so, as you may know, Control Tower is our easiest way to set up and govern a multi-account environment. And the best part is that this environment is secure and adheres to AWS best practices right out of the box. And as these best practices evolve, we keep releasing new Landing Zone versions to keep you up to date. So it's really the best way to quickly get started.

And how does it achieve this? It achieves this through four different parts.

The first one is a Landing Zone, which is essentially your home in the cloud. You set up which regions you want to use, how you want your accounts to look, and so on.

The second is a set of managed controls. This is where you decide which controls you want to apply to your house, and we at AWS author hundreds of managed controls so you can simply enable them and be good to go.

The third one is automated account provisioning. So with this, you can create new accounts as more teams come on board or you need more workloads. And each of these accounts is configured based on the settings you specified in the Landing Zone. So they're compliant right out of the box.

And if you remember, Richard wanted to use Datadog, Snyk, and other such tools. The Account Factory lets you customize those accounts with third-party tools, so you can configure them into the account right at creation time. That way, when your teams come to use those accounts, those tools are already there.

And the fourth part is the centralization of any identity, access and logging management. So where your logs flow, which security team has access to what accounts, how your identity groups are managed - all of this is centralized for you across your cloud environment.

So that's how Control Tower helps. And this isn't a one time setup. Control Tower will continuously manage this on your behalf and detect any drift from how you have configured it previously.

So for those of you who haven't yet used Control Tower, I highly encourage you to check it out.

But one of the key problems until 2022 was that Control Tower wasn't available in all of the regions. And so customers who had adopted Control Tower were faced with a choice: "Hey, do I deploy in this new region where I want to grow? And if I do, how do I take Control Tower with me?"

Well, I'm really excited to announce that they no longer have to make this choice. Control Tower is now available in all 28 commercial regions and 2 US GovCloud regions.

Yeah. Um the team did fantastic work. And just this year alone, we launched in 13 new regions across the world. And so we are now at 30. Really proud of the team for this one.

Now, as you remember, we talked about Richard wanting to use Okta as his identity provider, right? Now, how does he do that?

Well, Control Tower always had integration with Identity Center and Identity Center in turn, lets you bring in any identity provider like Okta, Microsoft AD, Azure AD and so on. But there was a problem.

The problem was when you set up Control Tower, it sets up some default user groups and permissions for you and then connects to those user groups, right? But customers told us that, "Hey, I already have my user groups and permissions. I don't want these and then it's unnecessary friction to have to go back and delete them and reconnect my own."

So earlier this year in June, we launched the ability to opt out of the default user groups and permissions and you can now use your own tools, your own user groups and permission sets when you're setting up Control Tower.

So this is just another example of how we are reducing adoption friction and helping you sort of adopt Control Tower at your own pace and piece by piece.

Well, great. So now we have the regional footprint done and identity covered. But Richard, like most of us, loves to automate things in code, because that's the only way you can scale and quickly transfer responsibilities between anyone on the team.

And for those of you who have used Control Tower in the past, I'm sure you'll agree that one of the top requests we have had from customers for a while is to expose more and more functionality of Control Tower through APIs, right?

Well, I'm really excited to announce that earlier today, we launched Landing Zone APIs for Control Tower. What this lets you do is you can now create, update, reset or delete Landing Zones programmatically. I'm showing the console here, but you can also do this through any language that's supported by the SDK or through CloudFormation and other tools.

Let's quickly go through each one of these. So the Create Landing Zone essentially takes in a manifest that specifies how you want a Landing Zone to look and we create one for you.

The Reset is useful when your Landing Zone has drifted from your ideal set and then you can quickly bring it back to your original settings.

Update is to update to a later version when available.

And then Delete is self-explanatory: it decommissions a Landing Zone. This is especially useful when you have a test or sandbox Landing Zone and you now want to move to production; you can quickly tear it down and create your real one.
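
To make that concrete, a Landing Zone manifest is just a JSON document. Below is a minimal sketch of the shape such a manifest might take when passed to the new CreateLandingZone API via boto3; every account ID, region, and retention value is a placeholder, and the authoritative schema lives in the Control Tower documentation.

```python
import json

# Illustrative Landing Zone manifest; every ID and value here is a placeholder.
landing_zone_manifest = {
    "governedRegions": ["us-east-1", "us-west-2"],
    "organizationStructure": {"security": {"name": "Security"}},
    "centralizedLogging": {
        "accountId": "111111111111",  # log archive account (placeholder)
        "configurations": {
            "loggingBucket": {"retentionDays": 365},
            "accessLoggingBucket": {"retentionDays": 365},
        },
        "enabled": True,
    },
    "securityRoles": {"accountId": "222222222222"},  # audit account (placeholder)
    "accessManagement": {"enabled": True},
}

# With boto3, creating the Landing Zone would look roughly like:
#   boto3.client("controltower").create_landing_zone(
#       version="3.3", manifest=landing_zone_manifest)
print(json.dumps(landing_zone_manifest, indent=2))
```

Reset, update, and delete follow the same pattern (`reset_landing_zone`, `update_landing_zone`, `delete_landing_zone`), each taking the Landing Zone's identifier.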

This feature is especially interesting for our partners, because what it lets you do is have a single manifest for what your consistent Landing Zone configuration should look like, and then quickly enable it for all of your customers and do version upgrades consistently and programmatically across your customers. So this is a really exciting launch for partners.

Now, apart from these three key launches, I just wanted to cover a few more that Richard would be interested in. The first two are features we announced in preview at last re:Invent; both of them are now generally available. This is the comprehensive controls management and the integration with Security Hub.

Speaking of Security Hub, it now lets you do central configuration across accounts, regions, and OUs, so you can easily scale your Security Hub controls to hundreds of accounts, and it lets you customize those controls with parameters. So think of a password length policy: you can now customize it to what you need.

Next is Organizations, which is our foundational service to help you manage multiple accounts, set policies, and so on. SCPs are essentially our access control policy mechanism, and we now support SCPs in the China regions as well. So they're supported in all commercial regions, GovCloud, and China; you can use them anywhere.
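
As a concrete sketch, an SCP is a JSON policy document you attach to an OU or account through Organizations. The policy below is a common illustrative pattern, a region allow-list; the region list, the exempted global services, and the OU ID are all placeholders.

```python
import json

# Minimal service control policy (SCP) sketch: deny actions outside two
# approved regions. Region list and exempted services are illustrative.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideApprovedRegions",
        "Effect": "Deny",
        "NotAction": ["iam:*", "organizations:*", "sts:*"],  # global services
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
            }
        },
    }],
}

# With boto3 it would be created and attached roughly like this
# (the target OU ID is a placeholder):
#   org = boto3.client("organizations")
#   policy = org.create_policy(Content=json.dumps(scp),
#                              Name="deny-unapproved-regions",
#                              Type="SERVICE_CONTROL_POLICY",
#                              Description="Region allow-list")
#   org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
#                     TargetId="ou-xxxx-xxxxxxxx")
print(json.dumps(scp))
```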

And then the last one is Enterprise Support. For those of you who have an Enterprise Support account, when you now create an account in your organization, it automatically gets enrolled, again reducing friction for you.

So, just to recap really quick: we wanted to help Richard get started with a well governed environment, and we talked about how Control Tower helps him get there with the multiple services it orchestrates on his behalf.

Next, we'll go to Maria and talk about how she can manage cloud ops at a large scale, and I'll have Andres walk you through that. I know you were expecting Maria, but no, you'll have to settle for me.

All right. Thank you, Sid. Can you guys hear me? OK, I can't tell from here. OK.

As Sid mentioned at the beginning, Maria is very interested in anything operations. You know that if you're in charge of operations in your organization, one of the top concerns is scaling, and scaling in an effective way, because you're going to scale, right?

So she's very interested in that. She wants to know how she can do that while governing the environments, making sure she has the necessary tools in place to meet regulatory requirements, and so on. And that is a very important question.

And I know it's a question many of you have probably asked yourselves: how do I do this effectively and scale to the degree I think we're going to scale? Many customers start with just two accounts in AWS, and within a year they have 100 accounts.

We've talked to many of our customers that are in that same situation. So here are the three key things that will guide our discussion of what's new for this use case. Maria has identified these three things as important for her.

Number one, quick detection and remediation of risks. I think you all understand what that means. You can imagine how important it is to detect any deviation from compliance standards, and even more so from a security perspective, that's critical: an S3 bucket open to the world, we know how that can be trouble, right?

The second one is aggregation and processing of data. There's so much audit data coming in; you want to aggregate and centralize it properly, but you also want to make sense of it in an easy way. You want to be able to query it and get insights from it.

And the third one is easy and quick analysis of a large data set. Now that you have that data and have been able to collect it, you want an easy way of querying and understanding it.

So one of the first things that will help Maria in this use case is what I like to call a core service for compliance. How many of you are familiar with AWS Config? Right, a very well known service.

For those of you who are not familiar with it, I'm just going to do a quick level set. AWS Config is a managed service that tracks all the resources in an AWS account. Essentially, once you turn on recording in an account, we will track any resource that is created or modified. We take a point-in-time snapshot of that resource, and we keep its history for a default of seven years.

So if three years ago you wanted to know how an S3 bucket was configured, you can go back and see that. From a resource tracking and inventory point of view, you can see why that is super important.

Now, the other important thing that Config does is provide a feature called rules. Rules let you evaluate the current state of a resource against a desired state, and if there's any deviation, you can take action on it. It could be as simple as flagging the resource as non-compliant, or it could trigger automation to fix it.

So this is a fundamental service for operating at scale when you talk about compliance.
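
As an illustration, enabling one of the AWS managed rules through the Config API is a small request. The rule name below is arbitrary, while `S3_BUCKET_PUBLIC_READ_PROHIBITED` is one of the AWS managed rule identifiers that flags S3 buckets allowing public read access.

```python
# Sketch of the parameters for AWS Config's PutConfigRule API.
# The rule name is arbitrary; the source identifier is an AWS managed rule.
config_rule = {
    "ConfigRuleName": "s3-no-public-read",
    "Source": {
        "Owner": "AWS",
        "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
    },
    # Only evaluate S3 buckets.
    "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
}

# With boto3:
#   boto3.client("config").put_config_rule(ConfigRule=config_rule)
```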

So one of the things that customers who have been using Config for a while have told us is, you know, we want to be able to track everything, but there are certain things we don't necessarily want to track; we want to be able to exclude them.

Now, Config has been out for many years, and for many years it has let you decide which things you want to track. The problem with that approach has been that once you decide to do that, you own the management of that list. In other words, whenever we release a new resource type, it doesn't get added to that list automatically; you have to go in and do it. You can see how that's trouble, right? Especially when we talk about hundreds of accounts.

So I'm super excited that a few weeks ago we released resource exclusion by type. This is pretty cool. What this allows you to do is say, you know what, for this resource type, say EC2 instances, I don't want to track it. I want to exclude it for this specific account.

You can see there's a little animation there that shows how you enable that. Now, why is this important? Well, some environments, like development environments, are not bound by many regulatory frameworks. You may not need or want to track everything; you just want to exclude a few resources, and you're happy with that. This lets you use the service in a more efficient way.

The other reason this is important is ephemeral workloads, like ECS, EKS, EMR, and Glue. Those workloads launch a large set of EC2 instances in rapid succession. If you're tracking every single resource that is created, you can imagine how many resource configuration items that's going to create.

So what this allows you to do now is say, you know what, exclude EC2 instances. Why? Because for those types of workloads you're more interested in the definition of the task, the service, the pod, or the job in Glue. That's what you're interested in, not the instantiation of the resource, because that's ephemeral; it's going to go away, and you already know how it's going to look. So you don't need to track it every single time it's launched.

So this is super powerful for those use cases. However, there was another thing pending: customers also wanted to track certain things periodically. Every 24 hours, I want to be able to run a full inventory of all my resources and see what's out there.

Well, I'm excited to announce that we just released periodic recording in AWS Config. What that allows you to do is specify resources that you want to track periodically, every 24 hours. So you have two options: continuous, which is the one that has always been available, and now periodic. This gives customers even more power to tailor their configuration for specific workloads, especially ephemeral workloads or the types of environments where they don't need that granularity of resource tracking.
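
Both features land on the Config configuration recorder. Here's a hedged sketch of what those recorder settings might look like, with EC2 instances excluded entirely (think ephemeral ECS/EKS/EMR fleets) and network interfaces recorded once a day; the role ARN and the specific resource-type choices are placeholders.

```python
# Sketch of the parameters for AWS Config's PutConfigurationRecorder API.
recorder = {
    "name": "default",
    "roleARN": "arn:aws:iam::111111111111:role/config-role",  # placeholder
    "recordingGroup": {
        # Record everything EXCEPT the listed resource types.
        "recordingStrategy": {"useOnly": "EXCLUSION_BY_RESOURCE_TYPES"},
        "exclusionByResourceTypes": {"resourceTypes": ["AWS::EC2::Instance"]},
    },
    "recordingMode": {
        # Continuous by default, with a daily (periodic) override for one type.
        "recordingFrequency": "CONTINUOUS",
        "recordingModeOverrides": [{
            "description": "Daily inventory is enough for network interfaces",
            "resourceTypes": ["AWS::EC2::NetworkInterface"],
            "recordingFrequency": "DAILY",
        }],
    },
}

# With boto3:
#   boto3.client("config").put_configuration_recorder(
#       ConfigurationRecorder=recorder)
```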

Now, that addresses the challenge of making sure Maria can track all her resources in an efficient way. But there was another challenge we mentioned a few minutes ago when we started talking about Maria: you want to give everybody the ability to search that data. Make it easy, make it simple.

We've had for some time in AWS Config a feature called advanced query. How many of you know about AWS Config advanced query? OK. For those of you who don't know, advanced query lets you write SQL-syntax queries on that resource data. Show me all the EBS volumes that are unattached from an EC2 instance: you can write that query in SQL syntax, and we return the data for all accounts and regions.
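
For reference, that unattached-volumes example might look like the query below in advanced query's SQL dialect; a volume whose state is `available` is not attached to any instance.

```python
# Advanced query sketch: EBS volumes not attached to any EC2 instance.
query = """
SELECT
  resourceId,
  accountId,
  awsRegion,
  configuration.volumeType,
  configuration.size
WHERE
  resourceType = 'AWS::EC2::Volume'
  AND configuration.state.value = 'available'
"""

# With boto3, against an aggregator covering all accounts and regions
# (the aggregator name is a placeholder):
#   boto3.client("config").select_aggregate_resource_config(
#       Expression=query, ConfigurationAggregatorName="my-aggregator")
```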

But customers wanted more. Not everybody is an expert in SQL. I mean, I can dabble in SQL, but every time I'm going to write a query, if you're like me, I'm off on Google trying to figure out how to write it and how to get the syntax right.

The other challenge is that not everybody knows the schema. We have it published on GitHub, but to understand all the nuances of the Config data schema, you have to learn it, and that's a challenge, right?

So I'm super happy that we have just released natural language query processing for advanced queries. What this allows you to do is write in plain English what you want to search for, and we use generative AI on the back end to construct the SQL query you need to extract that data. This is a super powerful feature, and we're sure it's going to democratize the use of advanced query. You don't need to know SQL. You don't need to understand the schema. You just write what you want, and it'll return it for you.

Let me do a quick demo of it, demo, demo time. This is the fun part. Let's see. There we go. Can you see my screen? Yeah.

All right. So just to tell you how you get there, this is the Config console, and we have advanced queries right here. Is this big enough? Let me make it bigger, that'll be easier. There we go, maximize this. There you go.

So here we have advanced queries, OK? By default, the first thing you see is a list of the sample queries that we provide, the most commonly used queries. But if you want to create a new query, you can click on this button here.

Now you can search either just that account and region, or you can search the aggregator. What the aggregator does is collect Config data from all accounts and regions in an organization, or from a list of accounts that you provide if you want to do that.

You can create as many aggregators as you want if you want to segment the data for different teams. Now, here's the new thing, right? We can just ask a question in plain English, and I have here a couple of queries.

I'm horrible at typing, so I'm just going to copy and paste.

So: "Show me all m5.large instances." I just click on Generate. This is the moment where I get very nervous, because this is a demo and we're live, right? And there is the query. It has constructed the query taking the schema into consideration, everything. And I can just say, OK, that looks good, populate it to the editor, and it pops in here, which is the query editor. Now I can run it, and down here... I don't have any m5.larges in that account, I guess.

Let me show you another one. Hopefully this one will work. I know the query works; it's just that I don't have any instances there.

"Show me all EC2 instances with a tag of 'patch group' and the value of 'account'," which is my patching group for a set of instances. Let's generate the query for that. Again, the key thing here is that you don't need to understand what the properties are called in the schema; you don't need to know any of those details. It will just construct the query for you.

There it is. Let's say populate to editor and run the query, and there you are: we have a couple of instances there that it returns. It's as simple as that. It's very easy. We just wanted to show you how it looks, but we know it's going to be super powerful to allow anybody to query that data.

That's it for Config. Let's go back here. OK, we did the demo already, so no need for that.

So another challenge that we identified is the fact that there's a lot of data coming in. You probably remember: audit data, a lot of audit data. Now, many of you are probably familiar with CloudTrail. Everybody knows that service. It's a managed service that we provide to collect all the audit data for your AWS accounts. All activity through the APIs or through the console gets sent into a log, and that log is delivered to an S3 bucket.

Now, traditionally, our customers have made use of that data by using partner tools or by using Athena, right? You can query the data and use it to investigate an incident. It could be a security incident, or you could be doing some forensics on an operational issue; you can use CloudTrail for that. But what we heard from customers is, "We want that to be easier," right?

If you use CloudTrail, you know that in the console we have a feature called event history, where you can create pseudo-queries by specifying certain fields and the values you want. Customers really love that, but they wanted it to be more powerful.

So we created CloudTrail Lake. How many of you are familiar with CloudTrail Lake? This was released more than a year ago. Excellent; many of you are not. What CloudTrail Lake is, is basically a managed platform where we take the CloudTrail data that is produced and we ETL it: we partition it and get it ready and optimized for search. And we give you a console in the AWS console so that you can query it very easily.

So it simplifies the process of making that data available for anybody to query, search, and get insights from. It's completely immutable, it supports multi-account and multi-region, and it also works for data events in CloudTrail. Super powerful.
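
Queries against a Lake event data store use a SQL dialect where the FROM clause is the event data store ID. A sketch (the ID and the date filter below are placeholders):

```python
EDS_ID = "example_eds_id"  # placeholder event data store ID

# CloudTrail Lake query sketch: recent console logins, newest first.
lake_query = f"""
SELECT eventTime, eventName, userIdentity.arn AS actor
FROM {EDS_ID}
WHERE eventName = 'ConsoleLogin'
  AND eventTime > '2023-11-01 00:00:00'
ORDER BY eventTime DESC
"""

# With boto3, queries run asynchronously:
#   ct = boto3.client("cloudtrail")
#   query_id = ct.start_query(QueryStatement=lake_query)["QueryId"]
#   results = ct.get_query_results(QueryId=query_id)
```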

Now, what we heard from customers (and this goes back to that use case of having data coming from all kinds of places) is that sometimes I want to be able to correlate that data with CloudTrail.

So we heard that customers wanted to put in all kinds of audit data and have the ability to search it just as easily. So earlier this year at re:Inforce, we released support for non-AWS sources in CloudTrail Lake specifically.

What that does is provide integration with some of our partner tools, partners like CrowdStrike and Wiz. Now you can just go in, add an integration, and all the audit data from those partners comes into an event data store in CloudTrail Lake, and you can correlate it when you're investigating an incident. Super simple, very easy to do.

The other thing you can do is create your own custom integrations. Those custom integrations can send data from anywhere: from an EC2 instance, or from an application that's running on premises. You can do that and then correlate the data in queries with CloudTrail. Super, super powerful. Customers love it, but then they came to us and said, you know what?
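
A custom integration sends events through a channel with the PutAuditEvents API (in the `cloudtrail-data` service). The event fields and the channel ARN below are placeholders; the exact required event schema is in the CloudTrail Lake documentation.

```python
import datetime
import json
import uuid

# One illustrative event for PutAuditEvents; all field values are placeholders.
event_uid = str(uuid.uuid4())
event_data = {
    "version": "1.0",
    "userIdentity": {"type": "ExternalUser", "principalId": "on-prem-app-42"},
    "eventSource": "my-onprem-app",
    "eventName": "RecordDeleted",
    "eventTime": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "UID": event_uid,
}
audit_event = {"id": event_uid, "eventData": json.dumps(event_data)}

# With boto3 (the channel ARN is a placeholder):
#   boto3.client("cloudtrail-data").put_audit_events(
#       auditEvents=[audit_event],
#       channelArn="arn:aws:cloudtrail:us-east-1:111111111111:channel/EXAMPLE")
```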

This is cool, and I'm using CloudTrail Lake now and I love it, but I have other data already sitting in an S3 bucket, or somewhere else, that I'm querying with Athena, and I want to be able to correlate it with the Lake data. Can you help us with that?

Yes. So I'm super excited to announce that we just released zero-ETL analysis in Amazon Athena for CloudTrail Lake. What does this do? I'll show you in a couple of slides how it looks in the console. When you enable it, when you enable federation on it, it's just a checkbox in the console, or just a setting through the API for the event data store.

We automatically go, in the background, and create a new database in Athena and a table for the event data store where your CloudTrail data is. What that allows you to do is query from Athena and create joins with data you're already querying in Athena.

So this is how it looks in the Athena console. In the data catalog, you see this aws:cloudtrail database that gets created automatically once you check that box, and the table name is based on the event data store ID you're using.

The cool thing about this is that it opens up a world of possibilities, because in a very simple way it allows you to integrate all these kinds of data coming from different places. Think, for example (and we just published a blog today; I encourage you to look for it) about how you can take Cost and Usage Report data and correlate it with CloudTrail.

How much did it cost me that this person launched this instance, over the lifetime of the instance? Those kinds of interesting queries can be built now. And the output of that data, of course, because this is running in Athena, you can take to QuickSight and visualize. Super cool, super powerful, and I'm very excited about it. But we want to do even more to allow our customers to use Lake.
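
As a sketch of what such a correlation could look like in Athena: the CUR database, table, and column names below are hypothetical and depend on your own Cost and Usage Report setup, and the CloudTrail Lake table name comes from your event data store ID.

```python
# Hypothetical Athena join between a Cost and Usage Report table and the
# federated CloudTrail Lake table. All database/table/column names are
# illustrative; adapt them to your CUR schema and event data store ID.
athena_query = """
SELECT ct.useridentity.arn               AS launched_by,
       cur.line_item_resource_id         AS instance_id,
       SUM(cur.line_item_unblended_cost) AS lifetime_cost
FROM "my_cur_db"."my_cur_table" AS cur
JOIN "aws:cloudtrail"."example_eds_id" AS ct
  ON ct.eventname = 'RunInstances'
 AND cur.line_item_resource_id = ct.instance_id  -- hypothetical join key
GROUP BY 1, 2
"""
```

Because the query runs in Athena, the result set can feed straight into QuickSight for visualization, as mentioned above.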

We're so excited about this product. Lake pricing is based on ingestion, and when we released Lake, we set a standard price with a seven-year retention period included. But a lot of customers wanted to use Lake for other use cases, not necessarily related to compliance, where they said, you know, we don't need seven years of retention. We need shorter retention. Can you do something with the price?

Well, another new thing: we're excited to announce new pricing for CloudTrail Lake. This pricing is optimized for ingestion volumes under 25 terabytes, and it starts with one year of retention, which you can extend if you need to, up to a maximum of 10 years.

Here's an example of how it works. Let's say you have one terabyte of CloudTrail management and data events, OK? Traditionally, with the seven-year retention period at $2.50 per gigabyte, that would cost $2,560 for that one terabyte.

With the new pricing, you pay 75 cents per gigabyte for one-year retention. That means that for the same one terabyte, you have a savings of 70% and only pay $768. Should you need to extend it, you can; the data is not going to be destroyed or anything like that. But this creates a very interesting use case for those customers that don't need the seven-year retention, and a more attractive price structure for them.
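
The arithmetic behind that example, treating 1 TB as 1,024 GB and using the per-gigabyte rates quoted above:

```python
GIB_PER_TIB = 1024
ingested_gb = 1 * GIB_PER_TIB            # 1 TB of management and data events

seven_year_rate = 2.50                   # $/GB with seven-year retention
one_year_rate = 0.75                     # $/GB with the new one-year retention

cost_seven_year = ingested_gb * seven_year_rate  # $2,560
cost_one_year = ingested_gb * one_year_rate      # $768
savings = 1 - cost_one_year / cost_seven_year    # 0.70, i.e. 70%
print(cost_seven_year, cost_one_year, savings)
```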

So, exciting stuff with CloudTrail Lake, but there's more. In AWS Config, during the last year we added support for 144 new resource types, and for advanced query we added 119 resource types. We're going to continue to do that month after month. We'll never be able to fully catch up, because we're always releasing new stuff, but we'll make sure we get to full parity with everything that is released.

For Lake, we also added selective stop and start of event ingestion. So now you can say, you know what, stop ingesting, then start ingesting again. Maybe you have some test workloads, training, or something like that you want to use Lake for; it's an excellent feature for that.

The other thing I want to call out is Resource Explorer. When we released that service, it was single-account. Now we've released a feature that lets you integrate with AWS Organizations and search resources across accounts. And the other thing recently released for Resource Explorer is the ability to search resources by application: you define applications, and you can search resources within that application.
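A cross-account search can be sketched with the Resource Explorer `Search` API. This assumes a boto3 client for the `resource-explorer-2` service with a multi-account aggregator index already set up; the `resourcetype:` filter in the example query is illustrative:

```python
# Sketch: search for resources with AWS Resource Explorer. The client is
# assumed to be boto3.client("resource-explorer-2") in a region that has
# an aggregator index, so results can span accounts in the organization.

def search_resources(explorer, query_string="resourcetype:ec2:instance"):
    """Return the resources matching a Resource Explorer query string."""
    resp = explorer.search(QueryString=query_string)
    return resp.get("Resources", [])
```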

So to recap, like I said before, Maria needed to handle that scale and meet the requirements for compliance and scalability. We discussed how AWS Config, with exclusion by resource type, periodic recording, and natural language processing for advanced queries, can help with that.

We also talked a little bit about CloudTrail Lake and using it to make sense of all that data and gain insights from it. We discussed the zero-ETL support with Athena and the new pricing that we're very excited about. With those tools, Maria can better address the challenge of scaling that we know all customers will get to.

To continue, we're going to talk a little bit more about compliance, and I'm going to pass it on to Sid for that. Thank you. Really excited about all of the CloudTrail Lake and Config features. But now let's move on to Nikki.

So if you remember, Nikki was the compliance manager, and she is concerned about how to ensure and demonstrate compliance. For those of us who have worked with auditors and in the compliance field, we know that compliance is really a two-part problem, right? First, you have to make sure you actually adhere to the requirements in all of your workloads and all of your accounts. And then you have to worry about how to generate the evidence: how do I collect all of the evidence across multiple business units and create a report to provide to my auditor? So there are both sides of the problem.

And of course, what makes this harder is that the set of regulations and the requirements within those regulations keep evolving over time and across different regions, right? So especially for our highly regulated customers who are subject to multiple regulations, this becomes a really challenging problem. When you dive deeper, you understand that for compliance managers, you can think of three key problem areas. There's more. But today, we are just focusing on these three.

The first one, going back to the shared responsibility model, is how should Nikki think about the compliance of the underlying AWS services themselves? Her builders are using AWS services. She doesn't control that code, but she wants to understand which of the AWS services are compliant with a given standard and be able to get those reports to provide to her auditors in turn. So that's the first challenge.

The second challenge is how can she unblock her teams from using the latest and greatest technologies? I'm sure many of us are hearing from our teams that they want to use generative AI, maybe Amazon Bedrock. How do you let them use that, but still make sure you have enough governance or enough controls in place so that you don't have a compliance issue later?

The third challenge is digital sovereignty. And when we say digital sovereignty, what we mean is a customer having full control over their digital assets: where your data is stored, who has access to it (including AWS operators), and where that data flows at any given time. So these are the three key challenge areas we'll dive into today before we go one by one.

I just wanted to zoom out a little bit and give you a high-level view of how AWS helps with regulatory compliance overall, and then we'll dive into each of those three. The first one is the compliance of AWS services.

So just like you, we also worry about regulatory compliance. All of our services continuously get third-party assessments and are evaluated for regulatory compliance.

We then build hundreds of managed controls across multiple services - Control Tower, Config, Security Hub, and so on. And provide them to our customers so that you don't have to write the controls from scratch. You can just turn on these managed controls.

We then have logging and monitoring services - CloudTrail, CloudWatch, and the other services that continuously monitor and log activity from your accounts.

And then we have compliance focused services. We'll see a couple in future slides - AWS Artifact, AWS Audit Manager, and so on. And last, but not least, we have a vast partner ecosystem where we have multiple partners who have deep expertise, both consulting and software for regulatory compliance. So that's something you can take advantage of as well.

Alright. So now let's dive into the three challenges Nikki had, right? The first one was compliance of AWS services. The two key resources she should know about:

First one is AWS Artifact. This is a managed service that gives you self-service access to all of our compliance reports and business agreements from AWS. So if you wanted to see how AWS is compliant with PCI, as evaluated by a third party, you can go to AWS Artifact, download the report, right?

And in 2023, Artifact launched two new features:

First - well, more than two new features, but two key ones that I'm going to talk about today. The first one is email notifications. So if you are interested in a certain report and how it evolves over time, you can be notified about it.

And second, very important, is they now have self-service access to third party compliance reports. So if you are using an AWS product from Marketplace for example, you can still see the compliance report right in AWS Artifact. So that just helps you accelerate your third party risk assessment process, right?

The second resource is the link in this QR code, which basically tells you, for a given compliance program, which AWS services were considered in scope. And this is powerful because let's say your builders want to use, I'll just take AWS Glue. Is it PCI compliant or not? You can quickly go there and see whether it was in scope for AWS's PCI compliance.

A note of caution here is again, going back to the shared responsibility model - the service just being in scope doesn't guarantee that you are compliant no matter what you do with the service, right? Obviously you would still own the data, the encryption, and so on while using that.

So that's for problem number one. Now, for the second problem, I wanted to introduce a service called AWS Audit Manager. How many of you know Audit Manager today? Ok perfect.

So Audit Manager is essentially our managed service that lets you collect evidence across multiple accounts and then generate a report based on that evidence. And you can choose any of our provided assessment frameworks which are in turn based on compliance frameworks or you can create your own with a set of controls.

Once you have set that, as well as your scope in terms of which accounts you want to monitor or audit, Audit Manager continuously collects evidence on your behalf and then keeps it ready for you when you want to generate a report, either for internal analysis or for audit.

Now, the reason I mentioned this service is that just earlier this year, we launched a best practices framework for generative AI on Amazon SageMaker. What this is is a set of 110 controls - some of them, as you see, are automated based on AWS best practices, and some of them are manual checks that you would do on your end. But it gives you a very good starting point for how to think about controls in the world of generative AI, and it goes across multiple categories like accuracy, fairness, resilience, and so on.
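One way to locate a framework like this programmatically is through Audit Manager's `ListAssessmentFrameworks` API. A minimal sketch, assuming a boto3 `auditmanager` client; the name-matching helper is just a convenience, not part of the service:

```python
# Sketch: find standard Audit Manager frameworks by name, e.g. the
# generative AI best practices framework. Assumes a boto3 client for
# the "auditmanager" service is passed in by the caller.

def find_frameworks(auditmanager, name_contains):
    """Return standard framework metadata whose name contains the given text."""
    resp = auditmanager.list_assessment_frameworks(frameworkType="Standard")
    return [
        f for f in resp.get("frameworkMetadataList", [])
        if name_contains.lower() in f.get("name", "").lower()
    ]
```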

So frameworks like these really help you unlock the use of Amazon SageMaker by your builders without compromising on the compliance part of it.

Now, next, we are going to talk about digital sovereignty. So again, digital sovereignty is essentially controlling all of your digital assets. And we at AWS made a digital sovereignty pledge back in November 2022 which had these four pillars:

The first one was data residency, which talks about which AWS regions your data resides in and having complete control over that.

Then there's encryption both at rest and in transit and using keys that are stored in AWS or outside of AWS.

The third one was granular access control and this includes controlling whether AWS operators have access to your data or not, not just your own builders.

And the last one was resilience: even if you restrict your data residency to just one region, how do you ensure resilience to natural disasters or disconnections in that scenario, right?

So these were the four pillars and since then, there have been multiple launches towards digital sovereignty. A good example is Dedicated Local Zones that we launched. The Nitro System has now been audited by a third party and approved, and so on.

But today the launch I wanted to highlight is that we have now added 65 new controls to AWS Control Tower that help you with digital sovereignty requirements. These 65 bring us to a total of more than 245 controls that go across the four pillars that I talked about - the encryption, data residency, access control, and resilience.

And so we are really excited to provide these to our customers so you can get started at a much stronger foundation.

The second part of this launch is also super interesting - it's the ability to customize the region deny control. For those of you who use Control Tower already, you'll know that region deny is the key technical mechanism by which you use Control Tower to specify which regions your builders can and cannot use for their workloads, right?

So you can say, for example, I only want workloads in the Americas; don't deploy in Europe or Asia, and so on. The problem was that this was at the landing zone level, so once you specified it, all of your accounts were subject to that configuration.

But customers told us that they wanted more flexibility - sometimes customers have one OU for Americas, another OU for Europe, a third OU for Asia, and so on. So I want to be able to control which regions I can and cannot use at an OU level, right?

And so with this launch, we have added a new control that lets you specify regions at the OU level and lets you specify certain IAM roles as well as some service actions that are exempt from that region deny. So you can say, okay this is a privileged IAM role, this should be allowed to deploy regardless of my region deny settings.
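To make the mechanism concrete, here is an illustrative region-deny style policy built in Python. It mirrors the standard SCP pattern (`aws:RequestedRegion` with an `ArnNotLike` exemption on `aws:PrincipalArn`); the role name and the short list of exempted global services are hypothetical, and the actual control Control Tower deploys is managed for you, so treat this purely as a sketch:

```python
import json

# Illustrative region-deny policy for a single OU: deny requests outside
# the allowed regions unless they come from an exempt role or target a
# sample of global services. Role name and service list are placeholders.

ALLOWED_REGIONS = ["us-east-1", "us-west-2"]           # e.g. an Americas OU
EXEMPT_ROLE = "arn:aws:iam::*:role/PrivilegedOpsRole"  # hypothetical role

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RegionDeny",
        "Effect": "Deny",
        "NotAction": ["iam:*", "sts:*", "route53:*"],  # sample global services
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"aws:RequestedRegion": ALLOWED_REGIONS},
            "ArnNotLike": {"aws:PrincipalArn": [EXEMPT_ROLE]},
        },
    }],
}

print(json.dumps(policy, indent=2))
```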

So very powerful feature. And next we'll see a demo of that. I'll have Andres walk through the demo.

[Demo transcript]

So to summarize, Nikki as the compliance manager needed to ensure and demonstrate compliance. We talked about several services that help with that - Control Tower with the new digital sovereignty release, Audit Manager with the new framework for generative AI and SageMaker, and Artifact with its new features.

To recap everything we've discussed today:

We started with Richard who was interested in how to get started. We discussed Control Tower there, how it can help to get started in an easy way, the new features that have been released related to Control Tower APIs.

Then we discussed a little bit about Maria and scaling up and we covered things like AWS Config resource exclusion, periodic recording, and natural language processing for advanced query.

And last we talked about regulatory compliance and how services like Audit Manager, Control Tower with digital sovereignty, and AWS Artifact can help with that.

So to go back to the story that Sid shared with us at the beginning about his daughter, you can see that these tools in fact allow you to go faster and innovate; they don't slow you down. And we're going to continue innovating to make sure that we can all keep having fun building great things while meeting requirements related to governance and compliance.

Also, if you want to learn more, there's a couple of things we want to share with you:

Tomorrow at 10:30am I'm going to be doing a demo of all these things that we released, more in depth. Look it up in the catalog, it's going to be at the Expo Center and we're going to be able to dive a little deeper into all of these things that we've shared with you today.

And the other QR code will take you to another session we're doing on best practices for cloud governance on November 29th at 10am.

We want to thank you for being here. Thank you for staying with us and learning about all the new things that we have for governance and compliance.

We want to ask you to please complete the session survey and give us a thumbs up there. We really appreciate that you've been here and thank you again for coming.
