Data protection and resilience with AWS storage

All right, welcome to re:Invent, everyone. Thank you so much for showing up. You all must love data protection and resiliency very much to be here, bright and awake early in the morning. So thank you for that.

I am Paulo Desai. I am a principal PM with AWS. Joining me today are Bisan Sethi, who is also a PM with AWS, and Marcos Perez, who is a senior storage specialist.

Now, data is one of the most valuable assets for the majority of businesses. Customers rely on the availability of data to perform their business operations. Today in this session, we are going to cover some strategies around data resiliency. We'll go through the core concepts that we offer from an AWS perspective, then we'll overlay the technologies with customer requirements. We'll go through some real-life scenarios where customers have chosen one strategy over another, and then we'll close the session out.

So who's excited? I know I am. Awesome.

Now, the best applications are the ones where we design for failures, right? We anticipate failures and we design for them, and our CTO, Dr. Werner Vogels, always says everything fails all the time. So you can't leave things to chance; we have to design for failures.

And when you think through data resiliency, it is about availability of applications even when there is an intermittent failure, a partial failure, or an overload on the system. So when you think through data resiliency, two key terms come to mind. One is high availability. High availability is the concept that if there is a component failure, my application should be able to withstand it. Customers require applications to be up all the time, which means that if, say, there is a server failure or a network outage, you want your applications to degrade gracefully. Think about it: this session is streaming live. If there is an internet overload, there will be latency in how the video gets transmitted and the quality may not be that great, but you would not expect the video to stop altogether, right? That would not be so good; people would not be able to see the session.

Similarly, you expect applications to degrade gracefully and recover from any outage, any load spike that may happen in the system. The flip side to that is disaster recovery. We expect applications to gracefully recover from any situation. However, there may be situations where your entire region is out, your data center is down because of a power outage, maybe there's an earthquake, hopefully not, but maybe there is an earthquake and it takes the region out and the application is down. You do have to plan for such scenarios as well.

So at that point in time, there are two key terms that come to mind. One is recovery point objective, which means: what is the loss in data that your applications and your business can withstand? The other is recovery time objective: how soon do you need your applications to recover? What is the time frame by which your business should be up and running? Keeping those two key terms in mind, you have to think through the strategies that you employ.

For example, you could think through deploying a pilot light scenario; we are going to cover all of these in detail in the latter half of this session. You could deploy active-active, warm standby, or a backup and recovery strategy. And what you deploy as a strategy for data resiliency will depend on your use cases and the losses that your business can withstand.

And we have done some research. In my role, I speak with customers all day long, and what I've heard from customers is that they have incurred real losses, in terms of revenue, business loss, and reputational loss, when they have not thought through data resiliency and data availability strategies. For example, IDC has found that the top 14,000 applications incur losses in the billions when data resiliency has not been thought through. On the flip side, if you deploy data resiliency as a key strategy from the get-go, you can realize some cost savings as well.

When you think through data resiliency strategies, it's very important to think through the kinds of failure scenarios that you may incur. The most common ones are human errors, right? You are trying to configure something early in the morning, you fumble the configuration, and you bring your system down. Very common, happens all the time. It's actually happened to me as well. Then there are network and component failures: there's an overload and your server fails. Those are the scenarios you address when you design for high availability.

But there could be scenarios where there is an earthquake or the entire region is down, there is a tornado in that region, some disastrous situation where the entire application goes down. So although you are designing for high availability, you always have to keep the other side of the coin in mind, which is disaster recovery. You need to prepare for both, and what you choose will depend on the criticality of your application.

For example, a lot of you have flown into Las Vegas today, and you have relied on airlines such as Southwest and others to get here. Say the ticketing system for Southwest, or whichever airline you took, went down. In such situations you do want to design for disaster recovery, because otherwise you will not be able to depend on those systems to, for example, catch your flights.

At AWS, we look at a shared responsibility model for data resiliency. For example, we are responsible for the resiliency of the infrastructure on which your applications run: we take care of the resiliency of the Regions, the Availability Zones, and the edge locations on which your systems run, all the infrastructure components.

We are also responsible for the compute, the storage, the databases, and the networking, the resiliency of those and the hardware systems on which your applications run. But this is a shared responsibility model. Our customers, who write applications on the infrastructure that we provide, are responsible for the resiliency of those applications. They are responsible for configuring the backup plans, for configuring policies that make their applications resilient, and for configuring security constructs so that access to the applications is secure. It's a shared responsibility model, and we provide the capabilities that customers can use to make sure that their data is resilient.

We have a robust storage portfolio consisting of block storage, object storage, and file storage, and customers can use native capabilities or capabilities provided by AWS Backup, Elastic Disaster Recovery, and Resilience Hub to ensure that their applications are resilient. Like I mentioned, the first thing that you need to do is think through your RTO and RPO objectives. When you think through disaster recovery and application availability, what are the losses that your business can withstand? Based on that, you have a variety of tools available that you can use to create your disaster recovery plans and your application resiliency plans. You can use point-in-time snapshots, backup and recovery, versioning, and replication as some of the mechanisms to make sure that your data is resilient.
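To make the point-in-time snapshot idea concrete, here is a minimal sketch using the AWS SDK for Python (boto3); the volume ID and tag values are hypothetical placeholders, not anything from the session.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Take a point-in-time snapshot of an EBS volume (the volume ID is a placeholder).
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Point-in-time snapshot for data resiliency",
    TagSpecifications=[
        {"ResourceType": "snapshot", "Tags": [{"Key": "backup", "Value": "daily"}]}
    ],
)

# Wait until the snapshot completes before counting on it for recovery.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
print("Snapshot ready:", snapshot["SnapshotId"])
```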

If you are using more than one resource from AWS, you can use AWS Backup as a mechanism to create a seamless policy to back up all the data within your application. Or you can always rely on third-party products and our partner products running on AWS to make sure that your application is resilient. But the key thing here is you have to make sure that you are able to test these scenarios out. You cannot leave that to chance.

So when you create your disaster preparedness plan, that plan to recover from a disaster, make sure that it actually works, because at the time of need it has to be working, right? So you want to test, test, and test to make sure that whatever plans you put in place are actually working.

Finally, you can always rely on the AWS Well-Architected pillars. We have a lot of guidance there as to how you can think through architectures to make sure that the application is resilient to any failures.

Now, as PMs, as product managers and solutions architects, we spend a lot of time talking to customers and talking about real-life scenarios where we have helped them overlay some of the capabilities that I just spoke about to make sure that their applications are resilient. For the next section, let's do this: I'll invite Bisan and Marcos to talk through how we have helped customers architect their environments based on the requirements that they have told us.

Thanks, Paulo. What Paulo teed up really nicely for us is the two key tenets of data resiliency: high availability and disaster recovery. We're gonna dive into how AWS can help customers architect for a high availability scenario.

For AWS, high availability is one of the key design principles we follow when we build our underlying infrastructure. We seek to eliminate any single point of failure, and we do this by provisioning 32 Regions worldwide with around 96 Availability Zones.

Now, our Availability Zones are designed to avoid single points of failure as well, so that operations in one Availability Zone are not impacted by another. Customers can take advantage of this and deploy their applications across Regions and Availability Zones to make sure that they are available and always ready, especially in those times when you fat-finger something or there's a disaster.

Marcos, I know you've talked with a lot of customers about how they actually architect this.

Yes. One thing that's very important is that once you are in a cloud environment, you now have a set of additional resources, the flexibility and the scalability available in the cloud, right?

If you look at this slide here, you see a traditional data center approach: one primary data center and a secondary data center in a colocation facility. That's a pattern we see with some customers that are using on-premises infrastructure.

When we go to the cloud, think about each one of those Regions that Bisan just mentioned: we are talking about at least three Availability Zones there. So we're creating a scenario where single points of failure are avoided, right?

And inside of those Availability Zones, there are multiple data centers as well. So think about how customers usually approach this: OK, I need to provide some type of availability. The recommendation is to use that infrastructure, the backbone of AWS.

So use the multiple Availability Zones. This is a traditional application, a type of scenario most of you are probably familiar with, on premises or in the cloud. The first tier here is just the point of entry into the application, right? So think about load balancing or any type of routing mechanism that will provide access to the clients of the application.

That very first tier provides a way of moving traffic between the different Availability Zones according to the workload, and also according to any type of failure or any type of resource issue.

The second tier is the tier where we provide the infrastructure for the application, so we're talking about the instances. And with Amazon EC2 Auto Scaling groups, you are able to create policies that will make your infrastructure grow and shrink as needed.

That's important for handling the workload, but it's also important for high availability, because any time you have an issue with a resource underneath, you are protected by Auto Scaling creating new resources on the fly as you need them.
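As a rough sketch of how that second tier could be configured, here is a target-tracking scaling policy set up with boto3; the Auto Scaling group name, target value, and size limits are assumptions for illustration, not values from the talk.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Target tracking: the group grows and shrinks to hold average CPU near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",  # placeholder group name
    PolicyName="keep-cpu-near-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)

# Keep at least two instances so a single instance failure does not take the tier down.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-tier-asg",
    MinSize=2,
    MaxSize=10,
)
```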

And the last tier, at the bottom of this slide, is the data, right? We're using RDS, our managed database, as the example here. The idea of having a managed database at the bottom here is to take away from you the undifferentiated heavy lifting of provisioning infrastructure for the database and thinking about high availability for the database.

As Paulo mentioned, it's a shared model; we take care of that for you when you're using a managed database like RDS. And in this case, you can see that we have the primary database in one Availability Zone, a standby copy in a secondary Availability Zone, and also a read replica.
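A minimal sketch of that database tier with boto3 might look like the following; the identifiers, instance class, and credentials are placeholders, and in practice you would pull the password from Secrets Manager rather than hardcoding it.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Primary database with a synchronous standby placed in a second Availability Zone.
rds.create_db_instance(
    DBInstanceIdentifier="orders-primary",     # placeholder name
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",           # use Secrets Manager in real deployments
    MultiAZ=True,                              # RDS provisions the standby automatically
)

# Asynchronous read replica, usable for read traffic and as a promotion target.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-replica",
    SourceDBInstanceIdentifier="orders-primary",
)
```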

Finally, sometimes you want to go beyond the Availability Zones and go across Regions. Again, Paulo mentioned disaster recovery for when you have a Region issue like a natural disaster. But think not just about the unlikely event of a Region outage; think also about connectivity to a Region.

Sometimes you have issues connecting because of network problems or connectivity in general. You need to have an alternative place for your data.

That's when you can rely on services that go beyond the limits of a Region: things like S3 for storage, DynamoDB for database, or EFS, our Elastic File System, for global reach. So you can go beyond just the Availability Zones and start replicating your data across multiple Regions. That's what we see customers doing when they are looking for a high availability scenario.
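For the S3 case Marcos mentions, a cross-Region replication rule can be attached to an existing bucket roughly like this; the bucket names and IAM role ARN are made up for the example, and both buckets need versioning enabled before the rule will apply.

```python
import boto3

s3 = boto3.client("s3")

# Both buckets must already exist (in different Regions) with versioning enabled,
# and the role must allow S3 to replicate objects on your behalf. Names are placeholders.
s3.put_bucket_replication(
    Bucket="app-data-us-east-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-all-objects",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # empty filter = replicate every object
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::app-data-eu-west-1"},
            }
        ],
    },
)
```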

Thanks, Marcos. It's great to hear that customers are taking advantage of all of the storage classes that we offer, along with our Regions and Availability Zones. So moving on to the second tenet that Paulo mentioned as part of data resiliency: disaster recovery. I'm sure most of you here have talked about disaster recovery with your internal customers and with your teams and wanted to understand what the best strategy is. On the screen, we'll be going through four specific strategies for disaster recovery. Of course, the question that comes to mind is which strategy is best, but there's no one-size-fits-all answer. It really depends on your use case and what your acceptable parameters are for a disaster. For each of these strategies there's a clear trade-off between cost, complexity, and recovery time objective, meaning how long you can actually afford to stay down. As Paulo alluded to, we see customers lose millions of dollars for very little downtime. So you want to talk to your application teams and understand the business impact of being down for five minutes versus five seconds versus five days. The answer is that there's probably an application that fits into each of these categories.

Now, starting from the left-hand side, we have backup and restore, which might be one of your more traditional DR strategies. Think of this as your lowest cost but highest RTO, where you are making that trade-off of cost versus time to recovery. Moving a little forward to warm standby: if any of you come from the on-premises world, you're probably familiar with this, where you have an application stack provisioned in another region or data center at lower capacity. The idea is that if you do have a disaster, you're able to fail over to that region or data center and scale up quickly. Obviously, there is a trade-off here again; this does come with an additional cost. But for some of those applications where being down for an hour is unacceptable, this is where you'd want it.

Now, pilot light is a strategy you can think of as the cost-optimized version of warm standby. This is a little bit unique to the cloud, where customers can have data replicated in another region but don't have to have the full application stack provisioned. So think of this maybe for one of your back-end applications which can be down for, say, an hour without a huge business loss; we'll dive into this a little deeper. All the way to the right, you have multi-site active-active, and this is for your core business-critical applications. Think of it: if you're running a trading exchange or a ticketing system, those two seconds that you could be down could amount to millions in losses. So you'd want to make the investment in that more expensive DR strategy in exchange for a much lower RTO.

So diving into backup and restore: like we said, this is probably one of the more traditional DR strategies that you've heard of, where customers are taking backups of their data and storing them in, say, a vault or protected area. We see customers use this typically for those non-core business applications, the ones that can be down for, say, an hour or two; think of your test environments or maybe your non-critical applications, like an HR system that doesn't handle payroll. We also see use cases for this type of DR strategy in ransomware recovery, where if you do have a ransomware attack, you may want to recover the files that were not corrupted. That might mean going back a week or two to make sure you really have clean data to get back to. Thirdly, not a disaster scenario, but customers can also leverage this for compliance use cases, where customers in, say, healthcare or the financial industry are mandated by government regulations to keep their data for seven years. It doesn't make sense to run a multi-site active-active deployment just to keep that data around, so they instead opt for backup and restore. Marcos, I know you're gonna walk us through how some customers implement this.

Yes. So when we think about backup, sometimes in these discussions we are looking for a disaster recovery solution, some way of having our data available somewhere else, and we always think about the latest and greatest data, right? We want the very last recovery point available on the secondary side. But that's not the case in all situations. There are situations where you need, as Bisan mentioned, historical access to points in time in the past. That's where we see backup used, not just for compliance but also for that kind of recovery.

Of course, there is a trade-off, and the trade-off is the recovery time objective. You take a little longer to restore that data, and it doesn't stop with restoring the data. Think about it: if you are restoring the entire environment in case of failure, for example, you have your traditional application here running with file systems, with databases, with instances, and you have an issue. If you need to restore, a backup solution is going to be the most cost-effective one, but you have to keep in mind that you have to restore the data somewhere, and the restore is just the first step. Then you have to restore the infrastructure, make it available, and set up configurations, IPs, application links, and everything else that you have to put in place after a disaster. That makes this solution a longer recovery time objective, but for some applications that's suitable.

So think about the different options that we are going through now, not as one solution for your entire company, but for each one of your workloads; each one of them has different business requirements and a different financial impact in case of disaster. Backup will provide that kind of recovery. In terms of RPO, the recovery point objective, backups are usually taken on the order of one hour; one hour is typically the recovery point objective of a backup. But there are services that can be protected more often, in seconds, with point-in-time recovery or continuous backup. So there is a trade-off between the different services and the different capabilities available for a backup solution; keep that in mind.

In addition to that scenario, think also about the situation where you need to have your backups, your protected data, somewhere else: the famous offsite backup that in the old days you would put on a truck and send to a third-party company that would keep it in a safe for you for several years. With the elasticity and flexibility of the cloud, you can take advantage of storage and backup solutions on AWS to do that for you, not in a bunker somewhere else but in a secondary Region. A backup solution like AWS Backup will give you the option, when you're creating the backup plan, to create a secondary copy of your backups in a secondary or tertiary Region, or in multiple Regions, according to your business requirements. This way you can restore your environment over there. Of course, we are talking about a long time to restore that data, but you still have the data, you have the multiple points in time, and you have the capability of doing that offsite as well.
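A hedged sketch of the kind of backup plan Marcos describes, one daily rule that also copies each recovery point to a vault in a second Region, might look like this with boto3; the vault names, account ID, and retention periods are placeholders.

```python
import boto3

backup = boto3.client("backup", region_name="us-east-1")

# Daily rule that stores recovery points locally and copies them to a DR-Region vault.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-with-offsite-copy",
        "Rules": [
            {
                "RuleName": "daily-0500-utc",
                "TargetBackupVaultName": "primary-vault",
                "ScheduleExpression": "cron(0 5 * * ? *)",
                "Lifecycle": {"DeleteAfterDays": 35},
                "CopyActions": [
                    {
                        "DestinationBackupVaultArn": (
                            "arn:aws:backup:us-west-2:123456789012:backup-vault:dr-vault"
                        ),
                        "Lifecycle": {"DeleteAfterDays": 35},
                    }
                ],
            }
        ],
    }
)
print("Created backup plan:", plan["BackupPlanId"])
```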

Great, thanks, Marcos. Now coming to pilot light: we talked about this a little bit on the first slide as being a cloud-native type of DR strategy. Pilot light is great for those semi-critical business applications where you can expect an RTO of about 15 minutes to an hour. So you're not down for too long, but there still is that little bit of lag time. Like we mentioned before, with pilot light your data is replicated in another region, but the non-critical components of your application stack might not be. Therefore, if a disaster did occur, you would need to spend, say, 15 minutes to actually provision those other resources and scale them up so that you could fail over. That being said, this is a great DR strategy for applications that can be down without the business impact of being offline for an hour; I mentioned an HR system, or perhaps you have internal tooling or metrics that you can go without for a little bit. This is great because it is cost optimized, and we continuously hear from customers asking how they can better manage and optimize their cloud costs. So when you go back to your business units, definitely explore this option.

Yeah, this type of scenario is for when you are looking for the latest recovery point. You want something that will make the latest data available on the secondary site as soon as possible. And it's not about which application is more suitable for pilot light or for backup; it's about the financial impact. It's a trade-off: the bigger the financial impact of the downtime of an application, the more I have to consider a solution that will give me a shorter RTO, recovery time objective.

This is a very common pattern that we see customers applying when using a pilot light scenario. If you look at the left-hand side of the slide, you see the production environment. It's running as the scenario that I showed before: load balancing in front of the application, and in front of that load balancing I have Route 53 to route the traffic according to my business requirements. In this case, for a pilot light, what I will do is drive all the I/O to my primary site as the active site, and in case of failure I will switch over to the disaster recovery site. I have the load balancers on both sides to do the redirection of the I/O to the different Availability Zones.

One thing that you probably noticed is that I have the familiar infrastructure here on the production side, on the left-hand side: I have the EC2 instances, I have my Auto Scaling groups that will grow and shrink as my business needs, and I also have my database across multiple Availability Zones. What I have on the right-hand side, on the disaster recovery side, on my inactive side, may look exactly the same, but it's only similar; it's not the same. The difference is that I'm not deploying the instances here, so I don't have EC2 instances running in that environment; I just have a read replica of my database. What I'm trying to do here is keep the cost as low as possible, because I have the tolerance of spending some extra minutes recovering that environment. I have to spin up the instances, I have to promote the read replica to a primary instance, and then start using it. That can take 15 minutes or more, so if my application can tolerate that, it's a good way of saving costs on the disaster recovery pattern: you just replicate the data, you only spin up the resources when you need them, and you're only charged for them when you need them.
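The failover steps Marcos walks through, promote the read replica and then spin up the application tier, could be scripted roughly as follows; the DR Region, database identifier, and Auto Scaling group name are hypothetical.

```python
import boto3

DR_REGION = "us-west-2"  # placeholder DR Region

rds = boto3.client("rds", region_name=DR_REGION)
autoscaling = boto3.client("autoscaling", region_name=DR_REGION)

# 1. Promote the read replica so the DR Region has a writable primary database.
rds.promote_read_replica(DBInstanceIdentifier="orders-replica-dr")
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="orders-replica-dr")

# 2. Spin up the application tier that was kept at zero capacity until now.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-tier-asg-dr",
    MinSize=2,
    DesiredCapacity=4,
    MaxSize=10,
)
```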

Thanks, Marcos. Moving on to warm standby: things start to heat up as we move across the image. As Marcos mentioned, warm standby is essentially pilot light but with more application components provisioned. In warm standby, you have a secondary copy of your application stack provisioned in another region, but it's running at a lower scale. So you do have that increased cost that you wouldn't have with pilot light, but what you gain instead is a lower RTO. Think of this for your core applications that can have about 0 to 10 minutes of downtime. When you think of standby, think of when you close your laptop: when you open it back up, it's in standby mode, it might take a minute for the screen to come on, but once you're in, you're working. The key difference here between warm standby and pilot light is that you do not need to provision any of those EC2 instances or load balancers at failover time; they're already there, and traffic can automatically be redirected to that region. The little lag time of about 10 to 15 minutes is for those resources to scale up so you can actually support higher traffic. Again, this is great for those types of applications where the financial impact of being down for five minutes is greater than the cost of actually keeping this warm standby site.

Yeah, so this is quite similar to the pilot light scenario that you had before.

So we have the infrastructure running on the secondary site, but you probably notice the difference: now I have the resources available. I'm provisioning the actual resources that I need, and of course I'm being charged for that, but this is an application that requires a faster recovery after a disaster, so I can afford to have those resources available. This is quite common in different industries; you see a lot of retailers, or banks in the financial industry, running most of their applications in a scenario like this, because it's not active-active, so we're not paying for the full power (we're going to talk about that in a second), but it's also not something that takes hours to restore in case of a disaster. And you can take advantage of the Availability Zones that you have in each one of the different environments, with the load balancers sending the requests to the different sites. It will take a little less time, but it's going to cost a little more.

Thanks, Marcos. So now coming all the way to the right side of the spectrum: multi-region active-active. This is obviously the highest cost type of disaster recovery strategy, given that you are running two active regions with your data and resources replicated. Some customers run this in an active-passive fashion, where they're routing all their production traffic to one region and keeping a DR region; in the case of a disaster, they can easily flip over to the DR region and have zero downtime. Now, this is obviously very expensive because you're running two regions simultaneously, but it's for those customers that cannot afford even a second of downtime. Like we've mentioned, think of those trading platforms where millions of trades are happening in microseconds: being down for a minute is just unacceptable for the business and can result in losses of millions of dollars. Or think of your Black Friday cart: I know if I put something in my cart and the cart times out or something goes wrong, I typically abandon it, and at scale that can also lead to millions of dollars in losses. So we see customers use this in those key business-critical situations where being down even for a second could have millions of dollars of financial impact or even larger regulatory ramifications.

Yeah, when you talk about those types of requirements, and when you go talk to customers about what the recovery point objectives and recovery time objectives are for each one of their workloads, most of the time we get two common answers. One is "I don't know," and that's a good point to go back to the business and identify the trade-offs and the cost of downtime for an application. And the other common response is zero: an RPO and RTO of zero. Is that possible?

Yes, it is possible, but it's going to cost you, because you have to have all the infrastructure available twice. This is a typical pattern, and it's very common in the financial industry and the trading industry as well. So now we have Route 53 routing the traffic, but it is not routing the requests to just one region anymore. Now I have a situation where I have two or more sites that are active, so I'm pointing to those different sites, and I have the infrastructure, the instances and the load balancers, all spun up already at their full capacity in order to deliver the same type of performance on both sides, right?
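One way to express "Route 53 pointing at two active sites" is a pair of latency-based alias records with health checks, sketched below with boto3; the hosted zone ID, domain name, load balancer DNS names, their zone IDs, and the health check IDs are all placeholders.

```python
import boto3

route53 = boto3.client("route53")

def upsert_latency_record(region, alb_dns, alb_zone_id, health_check_id):
    """One latency-based record per Region; Route 53 answers with the closest
    healthy endpoint, so both sites serve traffic at the same time."""
    route53.change_resource_record_sets(
        HostedZoneId="Z0HOSTEDZONEID",  # placeholder hosted zone
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": f"app-{region}",
                    "Region": region,
                    "HealthCheckId": health_check_id,
                    "AliasTarget": {
                        "HostedZoneId": alb_zone_id,   # the load balancer's zone ID
                        "DNSName": alb_dns,
                        "EvaluateTargetHealth": True,
                    },
                },
            }]
        },
    )

upsert_latency_record("us-east-1", "alb-east.example.com", "ZALBEAST", "hc-east")
upsert_latency_record("us-west-2", "alb-west.example.com", "ZALBWEST", "hc-west")
```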

And one key difference here is that now we're talking about something that can go global, or at least beyond just a couple of Availability Zones. For that reason, we see most customers replacing, for example, RDS with DynamoDB, because DynamoDB has the global capability of multi-site access: you can provide the same database, the same namespace, to the applications, and they can write to both sites at the same time. That's one of the advantages. Also, on the storage side, you can use, for example, S3 to get the same type of capability of replicating across multiple Regions through CRR, cross-Region replication. So this is a common pattern when you need zero RTO and RPO.
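DynamoDB's multi-site write capability comes from global tables; adding a replica Region to an existing table is a single call, sketched here with boto3. The table name and Regions are placeholders, and the table generally needs DynamoDB Streams (new and old images) enabled before a replica can be added.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add a replica in a second Region, turning the table into a global table.
dynamodb.update_table(
    TableName="orders",  # placeholder table
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)

# Wait for the table to become ACTIVE again; the replica then accepts writes
# in eu-west-1 just like the original does in us-east-1.
dynamodb.get_waiter("table_exists").wait(TableName="orders")
```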

Great. Thanks, Marcos, for walking us through these honestly very real customer situations that we hear about all the time. But I wouldn't be a good PM for AWS Backup if I didn't double-click on our service and show you how AWS resources and services today can help you enact these DR strategies to effectively protect your data.

Now, AWS Backup is a fully managed, policy-based service that allows you to centralize and automate your data protection across AWS resources and hybrid resources. Not only do we manage your backups' lifecycle, policy, and scheduling, we also have capabilities that allow you to report on and analyze your backups so that you really know what you're protecting and how it works.

We've obviously mentioned other capabilities and services today that help enable your data resiliency, like S3 or EFS, which have that high availability built in. But the key difference with AWS Backup is that we allow you to manage data protection from a centralized location across multiple AWS resources. We continuously hear from customers that one of their biggest pain points is having to manage their data protection in silos: separately for the cloud, for their hybrid resources, or even across their individual AWS resources.

And then think about where this actually fits in a customer's application deployment. On the left, you have your typical on-premises infrastructure, where customers are running their applications in their own data centers, using backup vendor software to back up and manage their data protection, and storing it in their data centers themselves. Then there's the hybrid situation, which we see a lot of customers moving to, where customers run their applications in their own data centers but want to store their backups to a cloud target. And finally, all the way to the right, you have your cloud-native applications, where customers both deploy their applications and store their backups in the cloud. AWS Backup works very well for the hybrid and cloud-native situations, where customers, as I mentioned, can set a centralized policy across all of their AWS resources and some of their hybrid resources, leveraging Storage Gateway, to have a consistent policy and peace of mind that their data is protected without having to configure it 100 times, because we know with repetition there can be mistakes.

Sometimes you are using native services to protect your data; RDS provides some mechanisms, and EBS provides the snapshots that some of you may be familiar with. But think about doing all of that from a single dashboard, so you automate that task. Sometimes you are even using the service's native data plane for that, and you are able to drive it from a single point of view across all those different services. Customers are looking for that ease of use, and that's why AWS Backup was created.
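A small sketch of that "single dashboard" idea: instead of scripting each service's snapshots separately, you attach resources to an AWS Backup plan by tag. The plan ID, IAM role ARN, and tag values below are placeholders.

```python
import boto3

backup = boto3.client("backup", region_name="us-east-1")

# Anything tagged backup=daily (EBS volumes, RDS instances, EFS file systems, ...)
# is picked up by the existing plan; no per-service snapshot scripting required.
backup.create_backup_selection(
    BackupPlanId="11111111-2222-3333-4444-555555555555",  # placeholder plan ID
    BackupSelection={
        "SelectionName": "tag-daily",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [
            {
                "ConditionType": "STRINGEQUALS",
                "ConditionKey": "backup",
                "ConditionValue": "daily",
            }
        ],
    },
)
```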

Now, Paulo, I'd love to bring you back up so you can talk through some of the use cases we hear our customers use our service for.

Thank you, Bisan. Like Bisan mentioned, the very first use case that customers think about with AWS Backup is cloud-native backups. You have multiple resources in AWS that you're using, and you want a systematic, consistent, policy-based backup scheduling mechanism that gives you peace of mind that your backups are being taken on a regular basis. You come to AWS Backup and configure a simple policy, a simple backup plan, that allows you to schedule your backups automatically. But that's not the only use case. We find our customers using AWS Backup for compliance and governance reasons as well. It allows you to make your backups immutable, and you can then report on the status of your backups for regulatory obligations. Of course, backups are also used for disaster recovery; we talked about that in detail. Depending on your RTO and RPO objectives, you can use a backup and recovery strategy as a DR strategy. And since we make our backups immutable, if you run into a ransomware scenario where your data is corrupted by a malicious actor, you can be confident that you are able to go back to the version that had the correct, uncorrupted data and restore to that version. So these are the use cases that resonate well with customers, and our customers use AWS Backup for them.
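The immutability Paulo mentions maps to AWS Backup Vault Lock; a hedged sketch of locking a vault with boto3 follows, with the vault name and retention windows chosen only for illustration.

```python
import boto3

backup = boto3.client("backup", region_name="us-east-1")

# Once the lock takes effect, recovery points in this vault cannot be deleted
# before the minimum retention elapses, which is what protects backups from
# ransomware or a malicious actor with credentials.
backup.put_backup_vault_lock_configuration(
    BackupVaultName="primary-vault",   # placeholder vault name
    MinRetentionDays=35,               # recovery points cannot be deleted earlier
    MaxRetentionDays=365,              # nor retained longer than this
    ChangeableForDays=3,               # grace period before the lock becomes immutable
)
```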

And just to that point, we have over 130,000 customers currently using our service, ranging from small startups that are cloud native to large financial services, healthcare, and retail companies that are serving their own millions of customers and have data they need to protect. Overall, we've helped them back up over 1.8 exabytes of application data, all stored in that immutable fashion in AWS.

Can I see a show of hands: how many of you are using AWS Backup currently? OK, half the room, that's great, very exciting. For those of you not using it yet, we invite you to take a look at the dashboard in the console and start creating your backup plans. If you are already protecting your resources with RDS or EBS, it's a no-brainer to have something automating that task for you, and that's where AWS Backup will play a key role for you.

All right. So keeping all this in mind, we are here to help. We have the AWS Well-Architected Framework that will help you on your journey toward resiliency. We expose the same operational excellence practices that we use internally to our customers so they can build their applications using those principles. Of course, security is job zero at AWS, and we expose constructs that you can utilize to make sure that your applications, and access to them, are secure. We spoke a lot about reliability in this session: how you can use disaster recovery scenarios and high availability constructs to make sure that your application data is resilient. We also provide you with constructs to make sure that the application is performant. It's not about having the highest performance; it's about making sure you have the performance required for the business to run smoothly and efficiently. And you can do all of that in a cost-efficient fashion with the constructs that we expose.

You can continue your learning by going to aws.training/storage. You can make a learning plan using AWS Skill Builder, you can utilize our ramp-up guides, and as you continue to learn you can earn digital badges, so you can actually check your knowledge and own a digital badge as well.

Now, this is gonna be an exciting week. We have many, many sessions that are going to follow this one. Just two hours from now, Marcos is going to be on stage again talking about data protection and resiliency, and about safeguarding and auditing your data protection with AWS Backup. He'll walk you through how simple it is to create a backup plan and enable backups for your resources. Going forward, we have more sessions on ransomware data recovery with AWS Backup. We also have chalk talks on compliance, governance, and ransomware recovery, and the new capabilities that we are bringing out. Moreover, we also have a code talk, which is very interesting because it's not about slides: we are going to walk through actual code and how you can use those code snippets in your environment. So this is real-life code that you can actually implement.

I'll keep the slide up; please take a picture, and please do go to all the sessions. We would love to have you all there, going through the different things that we are bringing out at this conference. That is all from us. Thank you so much, it was great to have you all, and we'll take questions now.
