Confidential computing with AWS compute

Good morning. Welcome to Confidential Computing with AWS Compute. I'd like to begin by thanking all of you for waking up bright and early and choosing to spend your time with us. You have a lot of choices, but you chose us, so we hope we make this worthwhile for you.

I'm Arvind Raghu. I'm the specialist responsible for Confidential Compute at AWS. I will be presenting this session alongside JD Bean, Principal Security Architect.

Here's the agenda for today:

  • We will start by introducing Confidential Compute on AWS.
  • We'll then introduce the AWS Nitro System, AWS Nitro Enclaves and Nitro TPM.
  • We will then deep dive into Nitro Enclaves. We'll talk about how it's operationalized. We'll talk a little bit about what's new with Nitro Enclaves and then close that out with some of the key features and benefits of Enclaves.
  • We'll then piece it all together by walking through a sample data flow and show you how to bring data from an S3 bucket or a data lake of your choice into an enclave and process it.
  • We'll then talk about some example use cases and market segments where we think there's a lot of traction for confidential computing, then we'll touch a little bit on Nitro TPM and Secure Boot, and we'll close it out with some resources for you.

Alright. So what is confidential computing? Before I get into that, can I get a sense of how many of you are familiar with confidential computing? Just a quick show of hands. OK. That's about what I thought: 10 to 20% know about it. The rest are curious to find out what it is.

So what is confidential computing? Let's first look at what a standard EC2 instance, if you will, looks like. There's the instance on one side, and there's the cloud operator who's powering, enabling and operationalizing the instance. All of the different components of your code are sitting on the same instance, whether it's a trusted component of the code, an untrusted component, code that's processing highly sensitive data, or code that's processing regular data. Everything is sitting on the instance.

Confidential computing at AWS is defined as the use of specialized hardware and associated firmware to protect data while it is being processed from any unauthorized access. The emphasis here is on data while it is being processed. The mechanisms to protect data while it's at rest and in transit are well documented, mature and have been used for a while. Confidential computing is about extending that security to data while it's being processed, from any unauthorized access.

But when I say any unauthorized access, what does that really mean? There are two distinct security and privacy dimensions that customers are most concerned about when we talk about confidential computing.

Dimension one is where customers want to protect their code and data from operators on the cloud provider side, which is AWS in this case. So you have your code and data running on your instance and you want to protect it from the operators on the cloud provider side. The good news is with our Nitro System, with all of our Nitro based instances, you get that protection by default.

Dimension two of confidential computing is where customers want to protect their highly sensitive data and code from themselves: from even admin-level users on their side, or from malicious actors who could pose as admin-level users and gain access to the data.

So to recap two dimensions, dimension one is where you want to protect your code and data from us. And dimension two is where you want to protect that code and data from yourself.

But why does this matter now? Why are we talking about confidential computing? Why are you seeing this topic gain more traction? Here's a smattering of some of the data types that are being classified as highly sensitive and are getting processed in the cloud: personally identifiable information, encryption keys, health care data, financial data, digital assets, IP, sensitive machine learning models. All of these are being processed in the cloud. This is not new. They have been in the cloud for a while. Customers have been processing this data with sensitive workloads for a while now. But now everybody is starting to think about expanding the security posture of their workloads beyond protecting data at rest and in transit. We are seeing a lot more need and demand to protect data while it's in use. That's why we're talking about confidential computing more and more. That's why you're seeing this gain more traction.

But how do we bring this to you? What are the key capabilities that we have today to bring confidential computing to you?

Number one begins with our AWS Nitro System. The AWS Nitro System is the foundation of all virtualization for our modern EC2 instances; that's pretty much all of the instances that we have launched since early 2018. The AWS Nitro System addresses dimension one of confidential compute that I talked about earlier, where you get to protect your code and data from us by default. No AWS operator can touch any of your code and data. This is how it's architected; JD will talk about it a lot more later in the session.

The next capability is AWS Nitro Enclaves. AWS Nitro Enclaves provides a hardened and highly isolated compute environment, with additional isolation beyond what the Nitro System provides, to protect your code and data. That addresses dimension two of confidential compute that I talked about earlier. And lastly, we also recently launched Nitro TPM, which allows for cryptographic attestation of the health and integrity of your instances.

I'm now going to turn it over to JD to talk to you about the AWS Nitro System.

JD: Thanks, Arvind.

Hi, good morning. So, the AWS Nitro System. The topics that we're covering today in confidential computing, like we talked about, are about raising the bar: extending the traditional concepts we have of protecting data while it is in transit and at rest, and beginning to think about how we can leverage special-purpose hardware, firmware and associated software to protect data while it is in use. This is the future.

In order to discuss the future, we're going to take a brief trip back to the past. On the left side of the screen, you see a representation of classical virtualization, traditional virtualization if you will. This is the image that I think many of us would draw on the back of a napkin if someone asked us to sketch out what a virtualization system looks like. It's certainly what I would sketch out.

What you have there is a system host, a main board server, and on that server, at its fundamental base level, is a hypervisor or virtual machine manager. This is the fundamental piece of software that is responsible for virtualization: for carving up a single physical server into virtualized subsystems, multiple operating systems that can run on the same hardware at the same time.

But a virtualization system is much more than just the hypervisor or virtual machine manager. Customers expect a lot more out of that system and service. So you're going to have a number of additional services running on this board: management and orchestration, observability, security features. All of that is going to run as well.

In addition, a virtual machine generally is going to want to perform some form of I/O, or input/output. The virtual machine is going to want access to storage devices or network devices to communicate, to receive data, to record data, et cetera. All of those functions generally also require additional pieces of software to interface with external devices and then emulate a virtual representation of each device to present to the virtual machine.

In most virtualization systems, like the Xen system for example, you're going to use a general-purpose operating system called dom0 to contain all of these features as well. So you've got a lot of additional software running on this virtualized system before we even get to the really important part, which is the virtual machines: in AWS parlance, the EC2 instances.

So these white boxes at the top of our host here are the virtual machines, and there's a lot to be said about this model. This can be a very, very strong isolation model. In EC2, traditionally, we've had extremely strong isolation between tenants on our system. We don't do things like share cores contemporaneously across tenants. Tenants leverage the hardware-based isolation of the CPUs in these systems to maintain isolation between tenants and between the virtualization system.

That said, at AWS we're always looking to innovate and move forward in order to deliver better value, services and features to our customers. So we looked at this traditional model of virtualization almost 10 years ago and asked ourselves: how can we raise the bar? How can we push this forward to deliver better performance and, critically, better security for our customers? We set out on a journey that culminated in the release and announcement of the AWS Nitro System, which you see on the right side of the screen.

The Nitro System is a reimagining, a reinvention, of what cloud-native virtualization can look like. Now, I understand it's a relatively simple drawing, but the system itself is, well, quite beautiful. What we've done is removed the networking, the storage, the management, the security and the monitoring from the general-purpose operating system, from the dom0 that ran on the main system board of the host. We've removed dom0 entirely and moved those functions into special-purpose, dedicated, custom-built pieces of hardware: hardware accelerators built by Annapurna Labs, our internal silicon development wing. Many of you may have heard that on Monday night, Peter DeSantis announced the fifth version of the Nitro chip that powers these features.

We've moved these off of the system main board, creating two independent, logically and physically isolated security domains within the Nitro System. The vast majority of our virtualization system, the AWS code needed to provide the EC2 service, no longer runs on the same physical main board as customer workloads. This allows us to dedicate the full performance of the system main board - that's your Intel, AMD or even Graviton server chip - to customers, and to adopt a microservices-like architecture using physical hardware and firmware that we have built and designed specifically for this purpose.

The only thing left running on the system main board is an intentionally minimized, custom-developed hypervisor that exists to perform the bare minimum functions necessary to allocate resources, hand them over to a virtual machine and then, frankly speaking, step out of the way.

So let's take a look at the components of this Nitro System. You heard me already speak a bit about the Nitro cards. These are the individual computers, the systems on a chip that work together as a network providing that second domain of the Nitro system.

In the bottom half of the last slide, each Nitro card is dedicated to performing specific functions, accelerating those functions for better performance, better security and better isolation from customer workloads. We have Nitro cards providing local instance storage using NVMe. We have Elastic Block Store, which is network-based block storage for EC2 instances. We also have a card that is responsible for providing network connectivity to EC2 instances. And we of course have a very special Nitro card, the queen of the Nitro cards: the Nitro controller, responsible for orchestrating and providing the management, monitoring and security features of the Nitro System as a whole.

The second component is the Nitro Security Chip. This is the physical, special-purpose piece of hardware that is integrated into our system main boards and provides two critical functions. One is that it is able to prohibit any software running on the main system board from writing to nonvolatile storage. That means things like flashing firmware on one of our instances isn't possible unless it's been validated and approved.

Secondly, and very, very critically, the Nitro Security Chip plays an important role at system boot time. The security chip allows the Nitro controller to extend its root of trust into the system main board, validating all of the components, all of the nonvolatile storage, all of the firmware on the system at initialization time, to ensure that each component is approved and unmodified before the system is ever allowed to handle a customer workload.

Lastly is the Nitro hypervisor. I mentioned the hypervisor a bit earlier. This is a very, very lightweight hypervisor. We have intentionally removed all of the features that do not need to be there. There is no file system, there is no shell; there is nothing that does not need to be there. This hypervisor allows us to provide nearly indistinguishable performance from a bare metal system. Its main task is simply allocating memory and CPU, providing them to EC2 instances and stepping out of the way. Critically, each of the security-relevant components in the Nitro System can be live-updated with nearly imperceptible performance impact. This allows us to never have to ask whether a patch is important enough to interrupt a customer workload. That trade-off doesn't have to be made. We're able to update these components and ensure they are running the most secure and up-to-date software without ever impacting a running customer workload. Very, very important.

Arvind mentioned this earlier in the context of confidential computing: this is what this model has allowed us to do. As I mentioned, there are two security domains in the Nitro System. We have the system main board, the server where customer workloads run with the hypervisor. And we have the Nitro card domain, where the virtualization system functions run.

There is no interactive access capability whatsoever. The server communicates with the outside world, with the EC2 control plane, exclusively through the Nitro cards. There's no SSH server, there's no network connection, there's no file system. There is no way for an operator to directly access this environment.

Similarly, that pattern is repeated with the next domain. The Nitro cards, which are connected to the EC2 control plane, also do not have interactive access mechanisms. There's no SSH, no hopping in and pulling up a shell to interactively access them. The only way for an AWS operator to interact with the Nitro cards is through well-defined, least-privilege, audited, authorized APIs, and none of those APIs provides a mechanism to return to an operator the contents of customer instance memory, encrypted EBS volumes or other customer data.

I do want to underscore that point. This system is designed as a hermetically sealed system.

There is no data disclosing interface, there is no interactive access. This is an engineering challenge.

Right, I think anyone who has worked with software in production knows what it is like to want to pop into that production system and debug and understand what's going on when something doesn't go according to plan. That function does not exist in the Nitro System.

Occasionally, in very rare circumstances, that does make it difficult to debug problems. It puts a strong emphasis on EC2 engineering teams to ensure that they test and carefully manage their change management process before ever hitting production.

However, we feel strongly that the trade-off between the occasional friction of debugging and the security and confidentiality guarantees that this design provides is well worth it. We prioritize customer confidentiality and security over everything else.

We don't have time, perhaps, to go as deep into this topic as some might like. I'm very pleased, however, to be able to share with you today that a few days ago we published a deep dive white paper describing in depth the AWS Nitro System and its security design.

We cover the three components that I discussed today - the Nitro cards, the Nitro Security Chip and the Nitro hypervisor - and provide deep dive explorations of topics such as integrity protection, tenant isolation, microarchitectural side channel mitigations and, critically, our no-operator-access design.

If you're interested in learning more, I encourage you to check it out. Ultimately, what I want to leave you with is the understanding that when it comes to that first dimension of confidential computing - isolation and protection of data while it is in use from the cloud operator - the Nitro System is the core technological protection for dimension one.

To talk a bit more about how we think about dimension two of confidential computing, I want to invite Arvind back up to talk about Nitro Enclaves.

Arvind: Thank you, JD.

OK. So we heard about the Nitro System and how it addresses dimension one of confidential computing that I defined earlier. Let's switch gears and talk a little bit about how we address dimension two, the foundation of which is still the Nitro System you just heard about. But to address dimension two, we have a different feature called Nitro Enclaves.

What is AWS Nitro Enclaves? To understand that, let's first take a look at what a standard EC2 instance would look like. Here is a high level representation of what an instance might look like. Within your instance, you could have your OS, your applications that are running in there to process the data, third party libraries probably. And of course, a bunch of users who could have different levels of access to what's in the instance.

But most importantly, when you want to process data, you are going to bring encrypted data into the instance and eventually have to decrypt it and reveal plain text data to process it. When you reveal plain text data to the instance, now all of the other entities we talked about earlier could potentially have access to that plain text data. And that's really where dimension two comes into play - because now you could potentially have users at your end, who have access to the data, your own software has access to that data.

So how do we protect it? How do you make sure we have that additional protection bolted on to make sure we are protecting this highly sensitive data and code from yourselves? That's where Nitro Enclaves comes in.

Nitro Enclaves provides the additional isolation for data while it is in use. All you have to do from your standard EC2 instance today is carve out CPU and memory resources and create that enclave. So if you're running an instance today, you could dedicate a portion of that instance conceptually by dedicating x number of CPUs and memory that your application is going to require to run inside the enclave.

The enclave is connected to the parent instance from which you carved it out through a secure local channel. It's a highly isolated and constrained virtual machine, which means there is no external network connectivity from an enclave. There's also no persistent storage and no interactive access, so there's no root user access. You can't SSH into that enclave and see what's going on.

So now that you created the enclave, you could safely drop your application in there and bring encrypted data and decrypt it within the enclave to securely process it. At this point, the whole enclave behaves like a black box to you. You cannot see what's going on inside it, it will process the data. And if you choose to, you can re-encrypt the data and send it out.

What you do with the data inside the enclave is part of the shared responsibility model on your side. Our recommendation is for you to bring encrypted data through the parent instance, decrypt it inside the enclave, and then re-encrypt the data and send it back out. At no point do we want the parent instance to be able to see plain text data. The enclave is the only one that's ever going to see it. That's the model we recommend as you deploy enclaves for sensitive data processing.

So to recap with Nitro Enclaves, you're getting additional isolation on top of what you already got from the Nitro system to protect data while it's in use.

What is it and what is it not? The enclave is an isolated, hardened and constrained virtual machine. It has its own lightweight Linux OS, it has an independent kernel, and the enclave has its own encryption keys. It will create its own key pair and distribute the public key, while the enclave is the only one that ever keeps its private key.

So if you wrap data with the enclave's public key and send it to the enclave, only that enclave - not any enclave, only that enclave - will be able to decrypt that data.

The enclave looks and behaves a lot like a container, but it's not a container. We're using the Docker format to build the enclave image file to spin up an enclave, but that's about it. The enclave has no persistent storage, as I mentioned earlier, and it has no external network connectivity. The only connection that you have from the enclave is a secure local channel to the parent instance. So the parent instance acts as a conduit between the enclave and everything that's external to the instance.
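To make that local channel concrete, here's a minimal sketch of how a process on the parent instance might push bytes to the enclave over vsock. This is illustrative, not an AWS SDK API: the CID, port and length-prefix framing are all assumptions (the CID is assigned when you launch the enclave, and the port is whatever your enclave application listens on). The only AWS-specific fact here is that vsock is the enclave's sole communication channel.

```python
import socket
import struct

# Hypothetical values: the enclave's CID is assigned at launch time, and
# the port is whatever the application inside the enclave listens on.
ENCLAVE_CID = 16
ENCLAVE_PORT = 5005

def frame(payload: bytes) -> bytes:
    """Length-prefix a message so the receiver knows how much to read."""
    return struct.pack(">I", len(payload)) + payload

def send_to_enclave(payload: bytes, cid: int = ENCLAVE_CID, port: int = ENCLAVE_PORT) -> None:
    # AF_VSOCK is available in Python's stdlib socket module on Linux;
    # this is the only channel between the parent instance and the enclave.
    with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s:
        s.connect((cid, port))
        s.sendall(frame(payload))
```

Inside the enclave, the application would accept on a matching `AF_VSOCK` listening socket and read the four-byte length prefix before reading the payload.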

The enclave also has the ability to prove its identity. After you spin up the enclave, the Nitro hypervisor will sign an attestation document which contains measurements about the enclave that just got spun up. It has hashes of the enclave image, the application that's running inside the enclave, the parent instance ID and some other information, all packaged in that attestation document.

So the enclave, when asked, can produce that attestation document to prove its identity, and use it to get secrets from external services so it can process the sensitive data that's just been dropped into the enclave.

The enclave also allows user-defined information, like a nonce if you want to avoid replay attacks, to be added to the attestation document and produced for purposes of attestation.

We leverage all of this to provide you with first-class integration with AWS KMS. The enclave can prove its identity to KMS with the attestation document we just talked about - the one that was signed by the Nitro hypervisor - and KMS will validate it against a preset key policy.

Only if the measurements in the document match the preset key policy will KMS decide to reveal secrets to the enclave. The enclave can then get the secrets and use them to decrypt the data that's sitting inside the enclave.
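As a sketch of what such a key policy can look like, here's a policy statement using the `kms:RecipientAttestation:PCR0` condition key from the KMS integration with Nitro Enclaves. The role ARN, account ID and PCR0 value are placeholders, not real resources; in practice the PCR0 value is the SHA-384 measurement produced when you build your enclave image.

```python
import json

# Placeholder PCR0 value: in practice, use the measurement emitted when
# your enclave image file is built.
EXPECTED_PCR0 = "0" * 96

# Key policy statement sketch: kms:Decrypt is allowed only when the request
# carries an attestation document whose PCR0 matches the expected value.
key_policy_statement = {
    "Sid": "Allow decrypt only from the measured enclave",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/enclave-parent-role"},
    "Action": "kms:Decrypt",
    "Resource": "*",
    "Condition": {
        "StringEqualsIgnoreCase": {
            "kms:RecipientAttestation:PCR0": EXPECTED_PCR0
        }
    },
}

print(json.dumps(key_policy_statement, indent=2))
```

With a condition like this in place, a plain `kms:Decrypt` call from the parent instance fails; only a request carrying a matching attestation document succeeds.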

So all along you heard us talk about the capabilities that we have had in place - the Nitro system, Nitro Enclaves - we launched it about two years ago. But what's new, what have we been doing to innovate, what have we been doing to expand the portfolio of confidential compute?

Number one, very recently, we announced support for Nitro Enclaves for Graviton. Nitro Enclaves is now supported on AWS Graviton 2 and Graviton 3 CPU based instances. This allows you to leverage the price performance benefits that come from adopting Graviton based instances along with the sustainability benefits due to the lower power consumption of Graviton CPUs.

With this support, you also now get to use a choice of smaller instance types, down to a total of two vCPUs. What does that really mean? Let's unpack that a little bit. If you're using an x86-based instance, then one core is actually two vCPUs because of hyperthreading. We don't have that concept at all with Graviton, so one core is one vCPU.

So if you have applications that are not compute intensive, like ACM for Nitro Enclaves, then you could actually get away with dedicating just one vCPU instead of two for an enclave. This lets you choose smaller instance types for certain types of applications, again leveraging the cost benefits of using Graviton-based instances.

And lastly, with this introduction, we now have Nitro Enclaves supported on Intel, AMD and Graviton CPU-based instances. On the screen, you're seeing a smattering of some of the Graviton-based instances that we have rolled out Nitro Enclaves support for.

And then, as recently as yesterday, we now have support for Nitro Enclaves with Amazon EKS. What does that mean to you? Let's take a step back and think about how the life cycle of a Nitro Enclave is managed today.

To manage the life cycle of Nitro Enclave, you use a command line tool called the Nitro CLI. Some customers who are already familiar with using Kubernetes for their deployments want to use tools that they're familiar with to orchestrate enclaves along with their Kubernetes deployments.

Prior to yesterday, they would have had to write their own code so Kubernetes could orchestrate Nitro Enclaves. But with this announcement, with this launch, we have now made it easy for you.

We now support Amazon EKS and self-managed Kubernetes for orchestrating Nitro Enclaves. You can now use familiar Kubernetes tools to orchestrate, deploy and scale Nitro Enclaves from Kubernetes pods.

As part of this launch, we have also provided an open source tool called Nitro Enclaves Kubernetes device plugin. This device plugin allows the Kubernetes pods to be aware of the enclaves and gives them the ability to manage the life cycle of these enclaves.

We have a lot more information and technical details about this on our documentation page and also on GitHub. I encourage you to go and take a look at this. We're also happy to talk about it after this session with you if you're interested in learning more about Amazon EKS and self-managed Kubernetes support with Nitro Enclaves.

So let me recap a little bit about what we just talked about with Nitro Enclaves. Here are the key features and benefits. I would just bucket it under 3 big topics:

  1. The additional isolation and security you get with Nitro Enclaves.
  2. The flexibility.
  3. Cryptographic attestation capability.

The additional isolation, I think we've talked enough about it - the isolation you get with the Nitro system and then the additional isolation you get with Nitro Enclaves by creating this hardened and constrained virtual machine which has no storage, no network connectivity and thereby ensuring that you have a system that doesn't even allow you access to your own data.

The flexibility is what we just talked about with it being processor agnostic. It doesn't matter what processor-based instance you have in your fleet - it could be Intel, AMD or Graviton CPUs. We support Nitro Enclaves on all of them. And when you carve out the CPU and memory resources for the application, you have a wide range of flexibility: you can choose the combination you want depending on the application and the data payload you're going to drop inside the enclave. So it's an extremely flexible solution.

And lastly cryptographic attestation - enclaves can prove their identity and authorize the code that's running in the enclave to make sure that is the piece of code that should be processing the data that you're looking to process inside the enclave.

The enclave creates its own key pair and also comes with first class integration with AWS KMS.

We have talked about all of the key concepts, how it's built and what it's used for. So let's look at what a sample data flow looks like. But to understand what a sample data flow looks like, let's first understand who the key stakeholders here are.

For simplicity, let's look at 3 big stakeholders:

  1. The data owner - this is the person who has all of the sensitive data, probably the most to lose if the data was to be compromised.
  2. A system administrator who manages the life cycle of instances and who has access potentially to what's running in the instance.
  3. The developer - the one who develops the application that's going to process the data, and who could potentially have some malicious code running in there.

As you can see from the way I'm describing this, this is a scenario where nobody trusts anybody, right? And nobody needs to. And that's the whole point of developing systems like this.

So to use enclaves, what do you have to do? How do you put it through the data flow? It's a very simple 3 step process if you abstract away everything and take a look at it:

Step 1 is where you fetch encrypted data and bring it inside the enclave.

Step two is what we call send and verify, where the enclave sends the attestation document we talked about earlier to an external service like AWS KMS, which verifies the attestation document and decides whether it's going to reveal secrets to the enclave or not.

Step three is where the enclave receives the secrets, so you can then go ahead and process that data.

Are we good? I'll now turn it over to JD to continue with the data flow.

Great. So we have our parties, we have a problem. Let's go ahead and see how this plays out. A few concepts to lay out here. We're using Amazon S3 as an example. This is an object storage layer, but really there's nothing special about S3; we're just thinking about a persistent store for our sensitive data in encrypted form. The other concept I want to highlight is that of envelope encryption. Envelope encryption is a very common pattern in which a larger amount of data is encrypted using a data key, and that data key is itself encrypted under a key that lives in a key management system - in our case, AWS KMS, which uses FIPS-validated HSMs to manage keys.
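Here's a runnable toy of the envelope pattern, assuming nothing beyond the Python standard library. The XOR keystream cipher and the in-process "master key" are stand-ins so the sketch is self-contained; real envelope encryption uses an authenticated cipher such as AES-GCM, and the master key would never leave KMS.

```python
import hashlib
import secrets

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """XOR the data with a SHA-256-derived keystream. Symmetric, so the
    same call both encrypts and decrypts. A teaching stand-in only."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# Envelope pattern: a fresh data key encrypts the payload, and only the
# (much smaller) data key is wrapped under the master key.
master_key = secrets.token_bytes(32)     # lives in KMS, never leaves
data_key = secrets.token_bytes(32)       # generated per object
document = b"patient record: sensitive"

encrypted_document = toy_cipher(data_key, document)    # stored in S3
wrapped_data_key = toy_cipher(master_key, data_key)    # stored alongside

# To read: unwrap the data key (a KMS Decrypt call in real life),
# then decrypt the payload locally with it.
unwrapped = toy_cipher(master_key, wrapped_data_key)
recovered = toy_cipher(unwrapped, encrypted_document)
assert recovered == document
```

The design point is that the expensive, policy-guarded KMS operation only ever touches the small data key, never the bulk data.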

So our data is envelope encrypted, stored in Amazon S3, fetched by our parent instance and brought into the instance, and an enclave is deployed. The parent instance cannot access the data because it is encrypted at rest using keys that it cannot access. However, if that encrypted data is sent over the secure local channel - the vsock - to the enclave application, the enclave application can decrypt it. And it decrypts it as follows: the enclave requests a signed attestation document from the Nitro hypervisor.

The attestation document includes the specific measurements that are unique to this enclave. One particularly important measurement - we call these PCR values - is PCR0. PCR0 represents the bit-for-bit exact code that is used to boot this enclave. This is the exact application, cryptographically measured and signed. Additionally, the document includes a public key which corresponds to a private key that was generated by this enclave when it started up. The enclave can then make a request to KMS, sending the encrypted key along with the attestation document, encrypted in transit and proxied through the parent instance up to AWS KMS.
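The idea of a bit-for-bit code measurement can be illustrated with a flat hash, though this is only an analogy: Nitro Enclaves PCR values are SHA-384 digests, but the real PCR0 is computed by the build tooling over specific sections of the enclave image file, not a naive hash of the whole file as below.

```python
import hashlib

def toy_measurement(image_bytes: bytes) -> str:
    """Illustrative only: a SHA-384 digest over the image bytes, standing
    in for the PCR0 measurement computed at enclave image build time."""
    return hashlib.sha384(image_bytes).hexdigest()

eif_v1 = b"enclave application build 1"
eif_v2 = b"enclave application build 2"

# Any change to the code yields a different measurement, so a key policy
# pinned to one PCR0 value releases keys only to that exact build.
assert toy_measurement(eif_v1) != toy_measurement(eif_v2)
assert len(toy_measurement(eif_v1)) == 96  # SHA-384 = 48 bytes, hex-encoded
```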

Now KMS, as Arvind mentioned, has a first-class, very convenient integration with Nitro Enclaves attestation documents. KMS knows how to extract that document from the request, how to parse it and unpack it, how to validate its signature, how to pull out the PCR0 value and check that value against the policy for the KMS key, which allows decryption only from this specific application running in a Nitro Enclave.

So once KMS completes all of those steps, validates the document, and confirms that this enclave is permitted to decrypt this key, it will decrypt the data key and re-encrypt it using the public key provided in the attestation document. The key has now been re-encrypted at the payload level, and it is sent, encrypted in transit, back through the parent and into the Nitro Enclave. The Nitro Enclave now has direct access to the data and can process it to derive business value, to answer critical questions, or to perform sensitive operations.

Now, before we move on to the question of, OK, what are the use cases, what do we do with this data now that we've securely decrypted it and brought it into this isolated Nitro Enclave environment, I want to highlight one thing. You may have noticed there are some similarities between the model of a parent EC2 instance and a Nitro Enclave and the model of the two domains of the Nitro System. We base all of what we do, or at least 95% of it, on customer feedback. Customers really responded to and really liked the Nitro System model. But the question they posed to us was: how can we build systems like you have built the Nitro System? How can we build our own hermetic systems where we can actually work with sensitive data and can comfortably say that there is no mechanism for our operators to access it? That is what Nitro Enclaves enables customers to build for themselves in their own workloads.

So on to that question: what do we do with the sensitive data? What types of data processing? Well, as you've seen, decryption of data is often the first step, getting your sensitive data securely into your isolated environment. Because Nitro Enclaves is a very flexible and general-purpose computing environment, it enables customers to do any number of sensitive data processing activities. Common examples include signing with a sensitive cryptographic credential while ensuring that the signing process matches prescribed policies, tokenization and masking of sensitive data, and even inference with machine learning models.
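The policy-gated signing pattern can be sketched in a few lines of Python. This is purely illustrative: the policy, the function names, and the use of HMAC in place of an asymmetric signature are all assumptions for the sketch. The point is that the credential lives only inside the enclave and every request is checked against a policy before a signature is produced.

```python
import hmac
import hashlib
from typing import Optional

# The signing credential is held only inside the enclave and is never
# returned to callers; here it is just a byte string for illustration.
SIGNING_KEY = b"enclave-held secret credential"

# Example policy: only these destinations may receive signed messages.
ALLOWED_RECIPIENTS = {"treasury", "payroll"}

def sign_if_permitted(message: bytes, recipient: str) -> Optional[bytes]:
    # Enforce the prescribed policy before any signature is produced.
    if recipient not in ALLOWED_RECIPIENTS:
        return None  # policy violation: refuse to sign
    return hmac.new(SIGNING_KEY, message, hashlib.sha256).digest()

assert sign_if_permitted(b"pay $100", "payroll") is not None
assert sign_if_permitted(b"pay $100", "attacker") is None
```

Callers outside the enclave only ever see signatures (or refusals), never the key, which is exactly the isolation property the enclave provides.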

We see customers from across different industries seeking to solve different business problems leveraging Nitro Enclaves. We have customers working in blockchain, sensitive data processing and analytics, advertising technology, and even central bank digital currencies.

Another critical integration and use case for Nitro Enclaves is actually one that is provided by AWS: ACM for Nitro Enclaves. It's an integration with AWS Certificate Manager that allows customers to use publicly trusted, managed certificates vended by AWS Certificate Manager on their own EC2 instances without ever having access to the private keys for those certificates. This allows you to terminate TLS with a managed certificate directly on your EC2 instance using the NGINX or Apache web servers.

As I also mentioned, there's the protection of sensitive IP. We've talked a lot about sensitive data, but there's also the concept of sensitive code, where the thing you are trying to protect is the application itself. Machine learning models are a great example: customers will invest substantial amounts of effort into training a valuable machine learning model, and they want to be able to deploy that model for inference and collaborate with others without concern about inadvertent or unauthorized access to the model itself.

So customers can actually deploy that model inside a Nitro Enclave, provide it with inputs, and simply receive back the inference outputs. I'll now hand it back to Arvind to talk about a few more emerging use cases and workloads that we're seeing.

Thank you, JD. So that machine learning model use case is a good segue into some of the other use cases we'll talk about. Because if you really think about it, there could be one entity that owns the model and a different entity that owns the data. Maybe the model owner wants to make some money out of licensing that model, but doesn't want anyone to look at it. Enclaves provide the perfect environment to do that. Drop your model inside the enclave, drop the data in there. Nobody gets to see what the model looks like, nobody else gets to see what the data looks like, but both parties benefit from using an environment like that. That's the real value: you're starting to bring multiple parties who don't trust each other, but who could benefit from each other, to work together. One of the other areas where we are starting to see a lot more conversation and traction with something like Nitro Enclaves is blockchain workloads.

Number one, blockchain bridging. What does blockchain bridging mean? If you have multiple different public chains and you're trying to move assets from one chain into another, maybe you want to move something from Bitcoin into Ethereum or somewhere else, two different chains. When you move an asset between chains, it eventually has to hop the chain, and it actually hops on a node. Now, those nodes need to be secure. Those nodes need to make sure that nobody has access to the sensitive data that's coming in, maybe it's ownership data, maybe it's the asset itself, coming into the node and moving on to the next one. That's what blockchain bridging is, if you've not heard the concept before, and enclaves provide a great trusted execution environment for these nodes where bridging can happen between two different chains. It can happen between two public chains, or between a public chain and a private chain. Another big use case is hot wallets for digital assets: ownership transfers for any digital asset can happen inside a wallet.

Typically, wallets have a hot and a cold portion. Hot is where all the transactions happen. So all of these transactions where ownership and assets change hands could take place inside an environment where nobody gets to see who owns what and how much. And another big example use case we are seeing is the signature validator node. Here, if you have a transaction that's about to happen and you want to make sure it's protected by consensus, signature validation happens. A very popular approach you're starting to see is key sharding, where a key gets broken into multiple shards, each shard is validated separately, and then they all get put together in a consensus node. This is what it looks like.

Now you can think about each of these shards getting validated inside an enclave, and then eventually a consensus takes shape. That's another blockchain example. But don't just anchor on the key-shard example I showed you here: it could be any other smart contract, any other consensus mechanism in a blockchain where you want to protect the data inside a trusted execution environment like Nitro Enclaves. JD touched a little bit on advertising technology.

Here is a very good example of that. We recently released a blog on this as well; we worked with The Trade Desk on it. It is tokenization for ad tech. Cookies are starting to go away, and now there is a higher emphasis on privacy, on protecting your personally identifiable information. Without cookies, how are you going to do that? You need an isolated environment to protect customer data privacy, and from a business perspective you want to make sure that all of the ads and everything that's getting published still work seamlessly.

So enter tokenization, where the personal information can be brought inside an enclave and tokenized, using a UUID, which is just an example I'm using here, it could be anything else. The tokenized data can now be used to publish ads and to make other inferences based on usage patterns, the kind that used to be tracked with cookies. Those cookies could be replaced by these tokens and still function like the cookies did. And enclaves provide an environment where there's no persistent storage of any of these encryption keys or identifiers.

Once the data comes in, gets processed, and goes out, there's no more storage; it doesn't persist anywhere else. That's the advantage of using something like enclaves for an application such as tokenization for ad tech.
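A toy version of the UUID tokenization idea looks like this. The function names and the in-memory vault are assumptions for the sketch; the key property is that the token itself carries no information about the original value, and inside an enclave the mapping lives only in memory for the lifetime of the process.

```python
import uuid

# In-memory token vault. Inside an enclave nothing is written to disk,
# so this mapping exists only for the lifetime of the process.
_vault = {}

def tokenize(pii: str) -> str:
    # Replace a piece of personally identifiable information with a
    # random UUID token that reveals nothing about the original value.
    token = str(uuid.uuid4())
    _vault[token] = pii
    return token

def detokenize(token: str) -> str:
    # Only code running inside the enclave can map a token back.
    return _vault[token]

token = tokenize("jane.doe@example.com")
assert token != "jane.doe@example.com"
assert detokenize(token) == "jane.doe@example.com"
```

Downstream systems, such as ad publishing pipelines, would only ever handle the tokens, never the underlying PII.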

And then lastly, I want to leave you with one of the most powerful use cases that we are starting to see, which is multiparty collaboration. We talked about the machine learning model, where one party could bring the model and another party brings the data. But what if two different parties each have data, and they could both bring it in to make meaningful inferences over the two separate data sets, without having to reveal their data to the other party?

Let's take an example, and let's just take me. There are perhaps two social media companies that have data about me. Both know something about me, and each could probably make a new inference based on what the other knows about me, but they don't want to share the data with each other. They could drop their encrypted data sets inside an enclave, run a pre-validated, agreed-upon application that processes the sensitive data, and receive the outputs independently of each other, without ever having to reveal their original data sets to each other. But they have both, perhaps, learned something new about me in the process.
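The shape of that agreed-upon application can be sketched very simply. Everything here is hypothetical for illustration: in a real deployment the two data sets would arrive encrypted and be decrypted via the attestation-gated KMS flow described earlier, and the analysis function would be the code both parties reviewed and whose measurement is pinned in the key policies.

```python
# Hypothetical pre-agreed analysis both parties have reviewed:
# combine what the two data sets know about shared users, without
# either party ever seeing the other's raw records.
def agreed_analysis(dataset_a: dict, dataset_b: dict) -> dict:
    shared_users = dataset_a.keys() & dataset_b.keys()
    # Output contains only the merged attribute sets for shared users;
    # users known to only one party are never revealed to the other.
    return {user: dataset_a[user] | dataset_b[user] for user in shared_users}

# Plain dicts here stand in for the decrypted data sets inside the enclave.
party_a = {"me": {"likes_hiking"}, "alice": {"likes_tea"}}
party_b = {"me": {"lives_in_seattle"}, "bob": {"likes_coffee"}}

result = agreed_analysis(party_a, party_b)
assert result == {"me": {"likes_hiking", "lives_in_seattle"}}
```

Each party receives only `result`; the records for "alice" and "bob" never leave their respective owners.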

That is just one use case. Another big use case we're starting to see come out of this is trade reconciliation. Maybe two different governments have data on trade that they don't want to share with each other, but they want to reconcile the trade that's happened across their borders. So cross-border collaboration, at a much larger scale than the personal data example I just talked you through. All of these are use cases that are starting to emerge. There's a lot more; I'm just giving you some examples to think about when I talk about multiparty collaboration.

I want to close this session out by touching on the last topic we mentioned when we introduced our confidential compute capabilities, which is Nitro TPM. We launched Nitro TPM earlier this year. TPM stands for Trusted Platform Module. Nitro TPM is a security device that lets you gather and attest system state, generate and store cryptographic data, and prove platform identity. We did not previously have this capability, where EC2 instances could attest for themselves. But with the launch of Nitro TPM, EC2 instances now have capabilities similar to those we talked about when we introduced Nitro Enclaves to you. Nitro TPM is TPM 2.0 compatible.

What does that mean for you? If you have workloads on premises today that rely on TPM 2.0 capabilities, you can now much more easily migrate them into the cloud with Nitro TPM, because it conforms to the TPM 2.0 specification. And lastly, there are some new use cases being enabled in the cloud by the introduction of Nitro TPM: attestation is one of them, Microsoft BitLocker is another, and Linux Unified Key Setup (LUKS) is another.

We have all of the technical documentation on our website, and I encourage you to go take a look at it. If you want to learn more about Nitro TPM, feel free to talk to JD and me after the session; we'll be more than happy to talk to you. We also want to hear from you about some of the custom use cases that you might be thinking about as you look into Nitro TPM.

I want to close this session out by leaving you with two key resources, in addition to the white paper that JD talked to you about earlier. Number one, to learn more about enclaves, just go to aws slash enclaves. And number two, we have a self-paced workshop for you that walks you through how to spin up an enclave, how to work with it, how to drop your application in there, and how to do the cryptographic attestation that we talked about. You can do all of that on your own. The workshop is online, and anybody can go and use it. Please take that link, and if you want to talk more about the workshop, or anything about enclaves or confidential computing, we are going to hang out outside and will be happy to talk to you and take questions. With that, I will close out. Thank you for sitting and listening to us talk about confidential computing here today.
