Cloud security, data responsibility, and recovery: 10 key strategies

Michael: Hey, everyone. Thanks for coming to our session. I think it's the last one of the day before you get to go and do some more fun stuff in Vegas before the party. So again, thank you for coming. I'm Michael Cade. I'm a Field CTO (a posh title in front of customers, but a technologist really), and I've been at Veeam for 8.5 years.

How many people here are Veeam customers or know Veeam from the virtualization space? A lot of us, right? Our focus for the last 16 years has been around backup, backing up data and recovering that data as fast as we possibly can. And probably about five years ago, we started seeing a lot more data services being built into AWS, into the cloud.

So what Julia and I came up with for this session was to take the seven Rs that you've seen from AWS (Julia will go into those in a little more detail), look at them from a 2023 point of view, and add three more areas that we should be considering now on top of the existing seven.

So I'm joined by Julia. Julia, why don't you introduce yourself?

Julia: Yeah, sure. Hi, everyone. Thank you for being here. My name is Julia. I'm a Global Technologist at Veeam, and I'm also an AWS Community Builder, which means I'm very involved in AWS. My product specialty is Veeam Backup for AWS, so it's a pleasure to be here talking about AWS, migration, cloud, etc.

Michael: Yeah. Do you want me, can I?

Yeah, so we're gonna talk today about data protection and cloud security. But before that: a lot of the customers we know are migrating to the cloud, so we wanted to start with the seven Rs. It used to be six Rs back in 2016, when AWS started with it, and then they added an extra R.

But before thinking about migration, you have to think about a few things; you have to plan it first. You have the discovery phase, which is doing an inventory of everything that you have: all your data, your applications, your workloads. Then you assess and prioritize what you want to migrate first, what you don't want to migrate, and what doesn't need to be migrated. Then you determine the migration path, and you can have more than one path for different apps; many organizations use more than one migration path, and that's totally OK. That's why it's important to plan ahead and strategize first.

So two of the Rs are called migration Rs but aren't really about migration. One of them is retain and the other one is retire. Retain means you have applications on prem and you might want to move them in the future, but not now, or you never want to move them but you still want to keep them; we call them sunset apps. So you retain them in your environment and you don't move them to AWS.

The other one is called retire, or decommission: you don't want the application anymore, you don't need that service, so you can shut down the server, you don't want anything more to do with it, and you terminate it. So these two Rs are listed among the migration Rs but aren't really migration.

Michael: Yeah, I think there's also this point just before discovery, which is the C-level "we need to go to cloud because cloud is cool." We've all probably had that conversation. And the other part is that not everything is fit for the cloud. We've probably got servers with particular cards or particular hardware that don't allow us to lift and shift them into the cloud, and that might be why we've got to keep them on premises or in our existing data center.

Now, obviously, from a Veeam point of view, we have the ability to protect those workloads wherever they are, whether they're on premises in virtualization or physical servers, all of that good stuff. From that retire point of view, it could also be that we don't need the operating system anymore: I'm just going to move that database, or just move that unstructured data, into the public cloud, which we're going to get into.

Also, in terms of retain, maybe for compliance or security reasons you can't move to the cloud, so you just stay where you are on premises, and that's OK. Like we said, there are different paths for different applications and services.

Then the next one is called relocate, and it's mostly for VMware. You're using VMware on prem, you have VMs, and you don't want to have to learn everything about AWS right now (maybe in the future); you don't want to turn your VMs into EC2 instances. VMware has a joint solution with AWS called VMware Cloud on AWS, and the name says it all: you just move your VMs onto AWS and you can still use those native VMware VMs on AWS without having to relearn everything.

So you keep using them on AWS and start benefiting from all the cloud benefits. And in the future, if you want to change them, that's fine. But that's how relocate works.

Michael: And I think one reason that happens is the skills shortage. Everyone in the room is expected to know everything about everything in this IT world at the moment, and that's difficult. We've probably spent a lot of our IT career already learning that VMware world, so let's transition and take that with us. From a Veeam point of view, we work pretty much the same up there as well, so again, you don't have to go and relearn a different way to protect it, and I think that's a key differentiator. And then you can always start consuming AWS native services; at that point you're in the cloud, and there are a lot of solution briefs around using VMware on AWS with native cloud tools such as RDS and DynamoDB, hooking them in and taking advantage of those cloud-based services up there as well.

So now the next one, rehosting, also called lift and shift. This is the most common one that we see. It's basically lifting your workloads from on prem and shifting them onto AWS. Like we were saying, you take the VMs and turn them into EC2 instances with an AMI, and you can automate that; you can use tools like V2C to automate moving your workloads, so it's a click of a button to move them onto AWS, or if you like the work, you can do it manually and move them to the cloud as well. But this is the most common one that we see, and it's basically what the name says: lift and shift.

Michael: Yeah, from a Veeam point of view, like I mentioned at the start, we focus on backup and recovery of data regardless of whatever platform it's on, and that's great. But we also play a huge part in that migration story. We have the ability to take any image-based backup and relocate or migrate that into an EC2 instance. It's wizard-driven, but there's also an API you can take advantage of: you go and choose what type of EC2 instance, what security group, what VPC you want them all to be in, and then we do the lift and shift.
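To make that concrete, landing a restored image as an EC2 instance ultimately comes down to supplying the same kinds of parameters to the EC2 API. A minimal boto3 sketch with placeholder IDs, purely to illustrate the idea rather than how any particular product does it:

```python
import boto3

# Illustrative only: the AMI, subnet, and security group IDs below are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # image produced from the restored backup
    InstanceType="t3.large",                 # the instance type chosen in the wizard
    SubnetId="subnet-0123456789abcdef0",     # which VPC/subnet to land in
    SecurityGroupIds=["sg-0123456789abcdef0"],
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "migrated-web-01"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```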

And the fact is we can already store those backups in object storage, in Amazon S3, so the laws of physics aren't against us; we're not waiting hours to push that data up if you're already doing that. There's a lot of portability and mobility of that data when it comes to relocating, sorry, rehosting. There are a lot of Rs to go through. Yes, exactly.

Julia: Next, replatform. It's similar to lift and shift, but you want to add some enhancements: you want to upgrade the OS or the database, say. You also lift your workloads and move them to the cloud, but as you move, you add some enhancements. This is the second most common case that we see. You can also use V2C to do that automatically, and we see that it adds more benefits, because by moving to the cloud and adding these enhancements, you get more of the benefits from AWS as well.

Michael: Yeah, I think the most common one that we see here is around database servers. In years gone by we would have had our database servers running in virtualization or on physical machines, and now, do we really want them in EC2 instances? Now, some DBAs... how many DBAs are in the room? Cool. Right, I can't offend you then. Oh, I'll try not to offend you.

But being able to lift and shift that into a PaaS-based service such as RDS, or, from an EFS point of view, taking our unstructured data and offloading some of that responsibility to AWS. And yes, you're going to pay a premium for it, but now I don't have to worry about keeping the operating system patched, updated, et cetera.

So we're seeing a lot of that database migration, whether it's Postgres from a VM into RDS or MySQL into RDS, but also Oracle and Microsoft SQL Server up there. I think this is probably the more exciting migration path, just because it gets you to a place where you can spend time actually enhancing your application rather than worrying about patching and updating the operating system or keeping the lights on.
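To give a feel for the replatform target, provisioning a managed Postgres instance is one API call. A minimal boto3 sketch, with placeholder identifiers and a deliberately small instance class:

```python
import boto3

# Illustrative replatform target: a managed PostgreSQL instance.
# All identifiers and credentials below are placeholders.
rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="orders-postgres",
    Engine="postgres",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=50,                      # GiB
    MasterUsername="dbadmin",
    MasterUserPassword="change-me-please",    # use Secrets Manager in practice
    MultiAZ=True,                             # managed high availability
    BackupRetentionPeriod=7,                  # AWS-side automated snapshots
)
```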

Now, what goes against that from a DBA point of view is that maybe you lose a little bit of control of that database instance, the tweaking of certain parts of that database. But that's the trade-off you have to weigh up and decide on.

And again, from a migration point of view, because we're taking application-consistent, image-based backups, we have the ability to restore those databases in a consistent fashion back into wherever you need them to be. That could be RDS; it could be from a StatefulSet in your Kubernetes cluster into RDS; it could be taking RDS and putting it into a StatefulSet to give you more control back over that Postgres database. These are just examples, and moving data is no easy feat, so that's where we come in, to help provide that efficiency of moving that data.

Next we have repurchase. Imagine you're using some sort of service on prem and you want to use it on AWS. A lot of vendors have joint solutions with AWS, and sometimes you just have to buy a new license to use them on AWS. Again, you lift those workloads and move them to AWS with a new license, and you can still use those services on AWS. With Veeam, for instance, you don't have to buy a new license.

So even if you move your workloads from on prem or from another cloud provider to AWS, we have what's called the Veeam Universal License, so you can use the same license you were using before on AWS. It depends on the vendor, but it's all about keeping what you were using on prem or on another cloud provider, so that when you move to AWS, you still have that same type of service.

Michael: Yeah, I think there are two trends we're seeing out there in the field today. One is lift and shift into SaaS-based models: how can I really offload a lot of that control to a Salesforce, or to Microsoft 365 from a collaboration perspective? But then also think about your service providers, your MSPs, that offer infrastructure as a service and will now look after everything for you, rather than you having to spend time patching and updating; and then extend that out to backup as a service.

How can these MSPs take on the responsibility for that data, with you paying a recurring subscription for them to look after that workload? And then the last one is called refactoring. It's the most cumbersome one, but the one you get the most benefit from, because you totally revamp your application.

So you re-architect it to be cloud native. Everything that you had before, you change: the back end, the environment, the architecture, the infrastructure, all to be cloud native, using AWS services, and you get the benefits directly, right after you migrate to the cloud.

I think we do see companies doing this one, but it takes more time to migrate, because you need to have the experience and you need to understand how the other services work to set them up for success, so they can run well on AWS, because what works on prem sometimes doesn't work on AWS.

So I think most of us in the room will remember that physical-to-virtual type migration that happened. There is no equivalent physical or virtual migration into containerization, or into Lambda functions or Step Functions. That's why we've got that challenge: there's no P2V converter from that perspective, and no easy button, which means we haven't had that compelling reason to get into that containerized environment.

How many people are running some sort of Kubernetes or containers in their environments? Yeah, so not many of us, but some. Again, at last year's session we saw probably half the hands go up, less than what we just saw. And obviously, whether you're using serverless, containers, Kubernetes, or any cloud native service, there's still going to be some data attached to that somewhere: whether that's EBS, a virtual machine, a database or a data service, or some observability metrics we need to keep for regulatory reasons, there's still going to be some sort of data somewhere that we need to protect.

If you get the chance, if you're still here and the show floor is still open, there's an awesome stand from the AWS guys called Serverless Land. They're the guys with the coffee machine: you log into it, you order on your phone, and it goes through a Step Functions workflow to get you your coffee. What I mean is, that's still using DynamoDB at the back end.

That's important data. It doesn't matter whether we get into cloud native or whether we stay around virtual machines and EC2 instances, we still need to think about how we protect that workload, and that's some of the stuff that we'll get on to.

And just so you have an idea of how complex it can be, I like to compare refactoring to trying to change the operating system, the seats, everything on an airplane while the airplane is in the air, fully loaded with passengers. Your application is still working and needs to stay in its best state while you try to migrate it to the cloud. So it can be hard, but sometimes it's worth the benefits of being fully cloud native.

And before I move on, there are three last steps after you migrate to the cloud: validation, which is testing everything to make sure it's working on AWS; then transitioning, where you move it to production; and then monitoring, making sure everything keeps working fine. Sometimes that's not the easiest part.

Michael: Yeah, we make the diagram look very easy, right? Also, I think what we see is maybe we did relocate or rehost, but then we look to refactor or repurchase as well. We didn't have enough slide room to put everything on, because this is a continuous effort, keeping up with all the new stuff, like the 50 new services that AWS brings out a year. Let's take advantage of that and use it. So it's always going to be a progression.

Whereas I think we come from a world where we build a physical server and give it a name (hopefully not something from the Simpsons). Then we progress into virtualization, we start using things like infrastructure as code and configuration management, and we don't really care what that name is, and we actually don't care how long it lives, as long as it serves its purpose and scales up and down accordingly. But we're going to have that continued focus on evolving and using the services available for the job in hand.

Julia: Great, you're on the cloud. What's next from a security-in-the-cloud and data protection point of view? There are a lot of things on AWS, on the cloud, that are different from on prem, as I mentioned. And at Veeam, we love talking about data; we protect data, data is our main focus, and we run some reports every year: a ransomware report, a data protection trends report, etc. We've seen that companies are getting hit by ransomware attacks more than ever.

Last year we ran this research, and we saw that a large share of the almost 2,000 companies that took part in the study were victims of a ransomware attack, and many were victims more than two times. So now it's not a question of if, or even when; it's a question of how many times you are going to get hit by ransomware or another cyberattack. And what else have we seen? These attacks are becoming more common and more sophisticated, and the attackers are targeting backups, the backup storage, because they know that's where the treasure lies.

So this is a job we have to do together against these attackers, working together. That's why we're all here in this room talking about data protection, because it's very important to find out more about the tools we can use to become stronger against them. Do you want to add anything?

Yeah. So obviously ransomware and cyber security are a massive, massive threat and very topical. You saw the show floor full of security vendors doing the good stuff left of bang, the bang being the bad thing that happens; they're doing the preventative work, and right of bang is more us, about remediation. But I think it's also accidental deletion. How many people in the room have made a mistake and deleted a file, an email, dropped a database? I've done that as well. We all make mistakes, and ransomware might be the top one in that report, but accidental deletion is not far behind it, at least among those who would actually admit to it. I'm pretty confident that all of us in this room have made a mistake at one point, whether in IT or just in general.

Let's get on to the three new Rs. So with that, we came up with three new Rs, which are responsibility, resiliency, and recovery. For us, with your move to the cloud comes more modernization and all the good things, but you also need to think about these three new Rs.

Starting with responsibility: who here knows about the shared responsibility model with AWS? OK, not everyone. This is what I see when talking to people in the community: a lot of people don't know that there is a shared responsibility model with AWS, and with all the other cloud providers too. It's on AWS's website. And what does it say? It says that the responsibility for the security and resiliency of the cloud is AWS's: they're going to take care of all the infrastructure and the high availability, those eleven nines, that's up to them. But security in the cloud, and availability and resiliency in the cloud, are your responsibility.

So all the data, the applications, everything that you put in the cloud is your responsibility. If something ever happens to your data, applications, or workloads, it's your responsibility; they are not going to do anything for you. And people don't know that. They move to the cloud and think, OK, now I can forget about it, let me do something else. No: they need to keep managing it, taking care of it, and monitoring it. We're going to talk about the things that should be done.

So this is my 4th or 5th AWS re:Invent, and I think I've shown this slide every single time, but it's fantastic to see the number of people putting their hands up and having that awareness that the data is yours. When you drop that database, there's no one to ring; maybe it's the DBA or maybe it's the backup team, but it's your responsibility, and AWS are not going to bring it back for you. They clearly put that on their site, and they absolutely stand by their side of it: they're going to keep you up and running, they're going to keep the services there, but that data is your responsibility.

Now, talking about resiliency: resiliency means you need to be able to keep running and, if anything happens, be able to get back up and running as fast as you can. You probably all know about the Well-Architected Framework; it has a few pillars, one of them is reliability, and inside reliability comes resiliency. So resiliency is a very important part, one of the best practices when architecting on AWS, and it means you need to have the capability to recover if anything happens: a malicious attack, an accidental deletion, anything.

So this is important, and there are a few things you can do to be more resilient on AWS, starting with the 3-2-1 rule. The 3-2-1 rule is a backup best practice that says you need three copies of your data: one copy is your production data, plus two extra copies to avoid a single point of failure, kept on two different types of storage media. So it can be one snapshot on EBS and one on Amazon S3, and one of them needs to be off site.

I can show you here, this is a diagram, and it's basically this: you have an RDS instance and an EC2 instance, and you have a copy, a snapshot, of each; then you save them onto S3 object storage and send them off site. You can use cross-Region replication to send them to another region, or even better, send them to local storage on prem, or to another cloud provider.
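To make the off-site copy concrete, one simple primitive is copying the EBS snapshot into another region. A minimal boto3 sketch with a placeholder snapshot ID:

```python
import boto3

# Illustrative: copy an EBS snapshot from us-east-1 to an off-site region.
source_region = "us-east-1"
offsite_region = "eu-west-1"

# copy_snapshot is called against the destination region.
ec2_offsite = boto3.client("ec2", region_name=offsite_region)

copy = ec2_offsite.copy_snapshot(
    SourceRegion=source_region,
    SourceSnapshotId="snap-0123456789abcdef0",   # placeholder snapshot ID
    Description="3-2-1 off-site copy of production volume",
)
print("Off-site snapshot:", copy["SnapshotId"])
```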

So this is important: 3-2-1, don't forget the 3-2-1 backup rule. And I think the one thing to add here is where Veeam sits in the public cloud, from a marketplace deployment, where we come in to protect that. In this instance we have different regions, different VPCs; this could be across different accounts for even more resilience. And the important thing to note is that resilience isn't the same as availability. Amazon have some fantastic features and functionality across their products that provide high availability of databases across different regions, all of that good stuff, and absolutely take advantage of that, because that's your front end, that's your mission-critical data. But if you make a bad mistake, or bad things happen, you're going to replicate that bad thing to the other site, the other region, the other availability zone.

So where we come in is providing that resiliency across accounts, across permissions and IAM, as well as across all of your applications. That backup is how we get you back up and running as fast as possible when those bad things happen.

Now imagine you have everything in one account: your production data, your applications running, and also all your backups. If, unfortunately, something happens and a malicious actor gets into your account, the probability that they will be able to encrypt, delete, or corrupt your data is almost 100%. That's why we also have this notion of a logical air gap: you need to separate those accounts, so have one account for your production data and one account for your backup data.

And here we have another diagram showing that same 3-2-1: three copies, two media types, and one off site, and also having that logical air gap in another account. So your backups should reside in another account.
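As a rough sketch of that logical air gap at the API level (the account ID, snapshot ID, and profile name are placeholders): the production account shares a snapshot with the backup account, and the backup account takes its own copy that production credentials can no longer delete.

```python
import boto3

BACKUP_ACCOUNT_ID = "222222222222"            # placeholder backup account
SNAPSHOT_ID = "snap-0123456789abcdef0"        # placeholder production snapshot

# In the PRODUCTION account: share the snapshot with the backup account.
prod_ec2 = boto3.client("ec2", region_name="us-east-1")
prod_ec2.modify_snapshot_attribute(
    SnapshotId=SNAPSHOT_ID,
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=[BACKUP_ACCOUNT_ID],
)

# In the BACKUP account (separate credentials/session): take an owned copy,
# so a compromise of the production account can't touch it.
backup_session = boto3.Session(profile_name="backup-account")  # placeholder profile
backup_ec2 = backup_session.client("ec2", region_name="us-east-1")
backup_ec2.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId=SNAPSHOT_ID,
    Description="Air-gapped copy owned by the backup account",
)
```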

Yeah, it's all about privilege. Everyone could put everything in one AWS account, but that's not going to give you that reliability; it's not going to give you resilience against an attack on that account. Yes, we can have MFA, we can have all of the security credentials and safeguards on the way into our AWS Management Console, CLI, et cetera, however you interact with your AWS account, but there's still a chance we could fall foul of some malicious activity, and it might not actually be an external threat: imagine someone within your business becomes the malicious actor and has access to some of it. Exactly that. And again, continuing with resiliency, another way you can be more resilient is immutability.

So imagine you have your two separate accounts, and a malicious actor, external or internal, gets into your backup account. If your data is immutable, they won't be able to do anything with it. By turning your backup storage immutable, you put it in a WORM state, which means write once, read many: you put it in there and then no one can touch it, no one can delete it, no one can modify it, no one can encrypt it, not even the root account, for the period of time that you set it to be immutable. So it's another layer of prevention against all these attacks, and you can do it very easily: Amazon S3 has S3 Object Lock, and we use that API at Veeam, so it's just a checkbox. When you're configuring your backups, if you want to make them immutable, you check that box and they become immutable for the period of time that you choose.
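Under the covers, that maps onto S3 Object Lock. A minimal boto3 sketch of the same idea, with a placeholder bucket name and a short compliance-mode retention:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "example-immutable-backups"   # placeholder name; must be globally unique

# Object Lock can only be enabled at bucket creation time (versioning is enabled with it).
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# Default retention: every new object is WORM-locked for 30 days.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# Any backup written now cannot be deleted or overwritten until retention expires,
# not even by the root user while in COMPLIANCE mode.
s3.put_object(Bucket=bucket, Key="backups/orders-2023-11-28.vbk", Body=b"...")
```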

And it's another layer of protection against all these attacks. I'll mention something on the next one, because as much as everything here is on AWS, realistically we're probably still running other services somewhere else as well. So, being able to send our virtual machine backups and our physical server backups into that same S3 bucket, with that immutability, that Object Lock API and versioning enabled; and our Kubernetes backups, which might be in Red Hat OpenShift, might be in ROSA, might be in EKS, but might also be on premises. I want to be able to send all of them into that off-site location to provide resilience against something bad happening in whatever the source location may be. And that could be our collaboration tools, our SaaS-based workloads as well, such as Microsoft 365.

And with that, we came up with a new rule, an enhanced version of the 3-2-1 I told you about, which is 3-2-1-1-0. Besides the 3-2-1 we already talked about, there's an extra one, which means one copy should be offline, air-gapped, or immutable, one of those things I just mentioned. And the zero, which we're going to talk about in a few slides, means zero errors. You should be testing your backups to make sure they're not corrupted and that it's the right data, so that if you eventually need to restore, it's the right data from the right period of time. Because again, we see people who don't test their backups, and then comes the time when they need that backup and it's not the right file, or it's corrupted. So again: testing. And with Veeam we have automated testing, which Michael will talk about after.

Yeah, I always think this looks like a zip code, but the 3-2-1 rule isn't something Veeam came up with; think of it as a methodology. If everyone in the room, or anyone watching the video, takes anything away from this whole session, it's that 3-2-1 rule: three copies of your data, on two different media types, one of them off site. If you can implement that, you're going to save yourself when bad things happen. The extra one is, like Julia said, making sure one copy is immutable, in that Object Lock S3 bucket, in a different location off site, and then making sure that you test it. Now, we have automated ways to help test those backups as well; you'll hear us talk about SureBackup and SureReplica, as well as other tools we have within the Veeam suite, but we probably won't have time to get into all of that today.

So that then leads us on to the next point. Obviously, we're focused on moving data from A to B, wherever that is, and we're pretty good at it, right? In terms of compression, dedupe, space savings, cost efficiency, all of that good stuff.

The other thing that we're mainly focused on, though, is how fast we can recover that data. But we are a backup company; I don't want to pretend that we're a security company one little bit. Are we involved in the security of data and providing that capability, that resilience? Absolutely. And we just announced, a couple of weeks ago, integrations with security vendors like Sophos, for example, but equally with key management services that allow us to provide KMS-type capabilities within our product as well.

Again, that kind of goes back to our swim lane: it's about backup, recovery, and that mobility and freedom of data, how quickly we can recover it, and you can factor disaster recovery into it as well. But we're not going to create our own key management system; there are plenty out there, and AWS have a pretty good one. So let's integrate with that and take advantage of it from an encryption point of view when it comes to our backups.

So not only are we storing those backups in a deduplicated, compressed fashion, we're also going to apply our own encryption on top of all of that. What is it that Rick says? If you don't encrypt your backup, someone else will. So leveraging best-of-breed tools to provide that is a key part we should be thinking about as well. Across that whole set of Rs that Julia and I covered, we have to be thinking about the safety of that data.

And here is a diagram. Yeah, so you can see again the EC2 and RDS instances, using AWS KMS for those encryption keys. And don't forget to always rotate those keys, because just leaving them there and forgetting about them is not the way to go about it. You have to take care of them too, so rotate the keys and use those services.
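For reference, turning on automatic rotation for a KMS key is a single API call. A boto3 sketch, with a placeholder alias and description:

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Create a customer-managed key to encrypt backup data (description is illustrative).
key = kms.create_key(Description="Backup encryption key")
key_id = key["KeyMetadata"]["KeyId"]

# Give it a friendly, placeholder alias and enable automatic key rotation.
kms.create_alias(AliasName="alias/backup-encryption", TargetKeyId=key_id)
kms.enable_key_rotation(KeyId=key_id)

print("Rotation enabled:", kms.get_key_rotation_status(KeyId=key_id)["KeyRotationEnabled"])
```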

Yeah, which then brings us on to the principle of least privilege. So, making sure we don't use domain admins like we maybe did 10 or 15 years ago, one account for everything and everyone. Let's be really granular about what that IAM permission needs to be. It's very easy to go star-dot-star in an IAM policy (I have done it), but you shouldn't. Granular IAM permissions: everything that we do from a Veeam Backup perspective, whether I'm going to use an S3 bucket, OK, I'm going to create an IAM policy and a user for that, and I'm only going to give them the ability to do X, Y, Z, whatever that needs to be, and I'm going to do it on a service-by-service basis.

We should not have one backup IAM role that allows us to do backups, recovery, snapshots, and deletions across all of our different services in AWS; that's going to end badly. Let's have one for each, because we want to provide that least privilege, and we integrate into IAM; we use IAM to make sure we have the permission to do it. As we go through the wizard, you'll hopefully see that we hit that check-permissions button and it says: yes, you're all good, you have all the IAM access that you need.
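As a rough illustration of "granular, per service", here is a boto3 sketch that creates an IAM policy scoped to backup actions on a single placeholder bucket rather than star-dot-star (note there's deliberately no delete permission):

```python
import json
import boto3

iam = boto3.client("iam")

# Placeholder bucket: only this bucket, only the actions a backup writer needs.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-immutable-backups",
                "arn:aws:s3:::example-immutable-backups/*",
            ],
        }
    ],
}

iam.create_policy(
    PolicyName="backup-writer-s3-only",           # placeholder name
    PolicyDocument=json.dumps(policy_document),
    Description="Least privilege: S3 backup target only, no delete permissions",
)
```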

So we want to work with that. We don't want to take anything away, and we definitely don't want to open up the door, or leave the key in it, so everyone can come in. Yeah, and not only the IAM roles, but also implementing RBAC, so that depending on the role, people have just the permissions for their role, no more and no less. And implementing MFA is the most basic of the basics to have in place; we have RBAC for all our services, depending on whether you're just doing backups or something else, and MFA as well. All of those are part of the principle of least privilege, which is where zero trust security stems from.

Right, exactly that. And then that brings us on to the final R; I feel like I've already said it about 25 times: recovery. How do we get that data back when bad things happen? Do I need to bring back several virtual machines and several databases, or do I just need to bring back individual files, individual parts of a database? That's really our key focus from a recovery point of view.

And then, how do we test that? How do we ensure that those backups are in a good state? I'm not going to read all of the words on there, but I want to make sure the database is being backed up correctly, which is that zero from the 3-2-1-1-0, exactly that. And I want to make sure that the VM or EC2 instance actually fires up and actually serves the data that I need, in an isolated security group and VPC, so it doesn't affect our production environments.
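A very rough sketch of that kind of restore test, assuming the backup has already been materialized as an AMI (the AMI, subnet, and security group IDs are placeholders): boot it in an isolated network, wait until it's running, then tear it down. A real test would also probe the application itself.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholders: an AMI produced from a restored backup, and an isolated test network.
restored_ami = "ami-0123456789abcdef0"
isolated_subnet = "subnet-0aaaaaaaaaaaaaaaa"
isolated_sg = "sg-0bbbbbbbbbbbbbbbb"        # no route to production

run = ec2.run_instances(
    ImageId=restored_ami,
    InstanceType="t3.micro",
    SubnetId=isolated_subnet,
    SecurityGroupIds=[isolated_sg],
    MinCount=1,
    MaxCount=1,
)
instance_id = run["Instances"][0]["InstanceId"]

# Wait for the instance to boot before checking it serves the expected data.
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
print("Restore test instance is running:", instance_id)

# Clean up the test instance once verification is done.
ec2.terminate_instances(InstanceIds=[instance_id])
```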

We also want to make sure we can do that in an automated fashion, because if, every time something happens, you have to go in and manually restore and recover, it's going to take you a lot of time, and you don't want that. So automate as much as you can to save time; that's what we want with Veeam as well when you're moving to AWS, because a lot of companies move to the cloud because they want that productivity.

They don't want to do that undifferentiated heavy lifting, so automating as much as you can is important. Yeah, absolutely. When we first came out 16 years ago, focused on virtualization backup, APIs weren't really a thing, or they were very much closed APIs that we were talking to. When it comes to Veeam Backup for AWS, and even Veeam Backup & Replication now, we have an extensible data platform that means you don't have to live in the UI.

Let's put it into those automation tools; let's use and integrate with Terraform. We have Terraform scripts that allow you to use infrastructure as code to spin up your Veeam Backup for AWS and create policies based on that. If you've got X number of sites and accounts, that makes life a lot easier, being able to automate that away so you don't have to worry about it, but also to bring it into other systems and automate some of those tasks away as well.

And then another thing, again I've touched on this: what does granularity look like? Veeam wants to be as efficient as possible, whether it's cost efficiency or time efficiency, when it comes to recovering workloads. I don't want to force you to bring back all of those VMs. Sorry, I don't want you to have to bring everything back, especially if you only need one folder from an EFS share; I want to make sure that you can dive in, take one individual file, and bring that back.

And again, especially when it comes to S3 and the way in which we store those backup files and folders, we're only going to bring back what you need, whether that's on prem or in the public cloud, because we still have to think about the egress charges we're going to get hit with when we're taking data out of AWS, even from S3, even from backups.

And another thing that we've been constantly evolving is this capability around high-speed recovery. We coined, and patented, Instant VM Recovery probably 13 years ago now, and that's the ability to bring a machine back up in seconds. We have the ability to do direct restore to EC2; we have the ability to bring that back as fast as possible.

We have our Veeam Explorers that allow us to dive into application suites and bring out individual tables or individual files, from a point in time. So high-speed recovery goes hand in hand with that granularity. If I can recover something really quickly, then I haven't wasted time, I haven't wasted resources; I can bring it all back and be up and running as fast as possible, because the longer your application is offline and not working, the more money you're losing. So we want to help make sure that doesn't happen to you.

So, for the last 20 minutes of the session we've got a few demos that we want to run through. But it would be remiss not to talk about some of the key areas, some of the key innovations, that we've brought out from a Veeam perspective, and that is the ability to back up your Amazon S3 buckets. We're seeing a lot more production workloads leveraging Amazon S3 buckets.

So, being able to protect them in the same way that we protect unstructured data, NAS, EFS for example, and be granular about that recovery. We're seeing a lot of content delivery networks, and I'll get on to a little secret project of mine in a bit that lets me demo this. But also Amazon DynamoDB; we've heard a lot about that, and I swore I wouldn't say anything about AI, so that's the only time I'm going to mention it.

But if we think about serverless, and how we generate data or what we use data for, we store stuff in databases, and DynamoDB is a great way to store the data behind those serverless functions; we push that data into DynamoDB. We already have the ability to protect that from an RDS point of view, that's Aurora, MySQL and Postgres, but in this new version it's the ability to offload that into an immutable S3 bucket as well.
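For comparison, DynamoDB also has native on-demand backups in its own API. This boto3 sketch (table and backup names are placeholders) shows that AWS-side primitive; unlike the approach described above, it stays inside the same account and doesn't land in an immutable bucket.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Take an on-demand backup of a placeholder table.
backup = dynamodb.create_backup(
    TableName="order-table",
    BackupName="order-table-pre-change",
)
backup_arn = backup["BackupDetails"]["BackupArn"]

# Later, restore it to a new table (the original can be gone or left untouched).
dynamodb.restore_table_from_backup(
    TargetTableName="order-table-restored",
    BackupArn=backup_arn,
)
```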

And then also, again, just enhancing those granular permissions that we've been talking about. So let's get into some demos. Firstly, DynamoDB backup. Now, I could just show you the interface and we could walk around it, but let's go straight in. This is just a very simple web front end (sorry for the brightness, I didn't realize it was going to light up the room like that), but basically I have a DynamoDB table, and you can see that I have some important data, just some random data that I can add in there.

Can you click again, Julia? So that's reflecting a DynamoDB table. We can add in "AWS re:Invent" and what the product name is; ultimately, I'm just storing some data in DynamoDB to show you how we can get stuff back afterwards. So let's place that order, and let's make sure we don't hit one of those red buttons, especially before I've taken a backup.

OK, I've added that in there. You'll see some other funny inputs in there if you look closely. I have my DynamoDB table already open, and you can see that it tallies with what you saw on the front-end web page; you can see that we've added our new data.

Now, with Veeam Backup for AWS, you can see that we have our simple UI: EC2, RDS, VPC, EFS, and now DynamoDB. We've already created a policy that allows us to protect that workload. You can see down here that we have a table called order table, which matches what we have in the DynamoDB instance, and I've just clicked go to start that backup. I've sped this up, but it actually doesn't take very long.

You can imagine that 20 rows of data in a DynamoDB table doesn't take up much room or time. So, backup complete, including that new item I just put in. Now let's simulate a failure. And again, I'm in a different account: this is a backup account, and I'm connected to my production environment where my DynamoDB table lives.

Now, if I go and cause that failure scenario by hitting that big red button that says "clear the DynamoDB", shockingly, guess what it did: it removed everything from that table. A very simple failure scenario, but ultimately I now need that mission-critical data back.

So I'm going to head back into Veeam Backup for AWS. I'm simply going to find the resource I want to restore (not in RDS, don't do that). I'm going to find the job, the policy that I created, find the restore points that I have, and choose one. You can see here that I have one, and I've picked that restore point.

Then I choose which IAM role has the rights to get me back into that other AWS account. So I've done that, and then: what can't be restored? "The following tables cannot be restored" because, oh yeah, the table's still there; I haven't removed it, I was just maliciously hacked, or something just got rid of its contents.

So I need to confirm that I actually want to remove and replace it, because it might not have been a full breach, it might not have been a full encryption.

We might have just deleted a few lines of the table, so I need to go through what that process looks like. And we're not done yet: on that last page I was able to restore to the original location or to another DynamoDB table. That's where that migration piece could come in, but also the ability to leverage that data: how could I give that DynamoDB table to someone else to take advantage of?

I'll speed through this very quickly, and then we'll go and refresh our application and hopefully you'll see, well, I know you'll see, the 20 rows of the database. So a very simple demo: we started with good data, added some data, caused a failure, and then recovered from it. So there it is, and if I refresh this page it should do the same. OK, all back up and running.

The second demo I wanted to show is that S3 backup. For this one I decided I was going to create something called Veeam Flicks; I hope I don't get in trouble for that. Basically, it's a content delivery network that stores the demos I've been creating in an S3 bucket but exposes them in a very simple web format. The admission here is that I'm not a developer, and I'm definitely not a UI developer, as you can see if we go to it.

So within my S3 bucket: this is actually a follow-on from that last demo, where I'm also storing all of that data, those orders, into an S3 bucket as well. Within Veeam Backup & Replication, we have the ability to send our S3 backups to local disk; that could be a D2D device, that could be for regulation, that could be a different location. And then we also have object storage to object storage.

Now, let's say maybe I've used another object storage option that isn't in the cloud we're here for at this event, and I want to get into the cloud here, into Amazon S3. We can do that: we can get granular, we can restore those individual files and get all of those back up and running.
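At its simplest, that kind of granular object-to-object move is a server-side copy between buckets. A minimal boto3 sketch with placeholder bucket names; an S3-compatible source outside AWS would need its own endpoint URL and credentials:

```python
import boto3

# Placeholder buckets within the same endpoint/account, for illustration only.
s3 = boto3.client("s3", region_name="us-east-1")
source_bucket = "veeamflicks-source"
target_bucket = "reinvent-restore"

# Copy a handful of selected objects rather than the whole bucket (granular restore).
for key in ["videos/demo-1.mp4", "videos/demo-2.mp4"]:
    s3.copy_object(
        Bucket=target_bucket,
        Key=key,
        CopySource={"Bucket": source_bucket, "Key": key},
    )
    print("Copied", key)
```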

And then I've also got the ability to back up S3-compatible storage or other object storage options as well; it's quite difficult when you're being recorded not to mention other vendors. And here's Veeam Flicks. I've got a couple of demos on there, and if we click on them, just to prove it is real data, there are demos in there.

OK, good stuff. But maybe you can see where they live; no, I don't think I show it, but they're in a different location. They're in another S3-compatible storage, not in my AWS account right now. So I have a couple of backups; you can see here I have two, and I can restore the entire bucket, I can be granular about what I recover, or I can restore to a different location as well.

So we load our restore points, and with the options we have here you can see we can choose our points. We've got a slider to see what happened; I actually took the first backup as we were uploading those videos, so you can see the change in the number of videos we had.

Now, where do you want to restore it to? The original location, because it's a restore scenario, or do you want to migrate it to your shiny new Amazon S3 bucket that we've created, called re:Invent restore? That's where I want to go, so click next, and that validates it.

Now, what do you want to do? Do you want to skip restoring existing objects, or replace older objects? We give it some options about what that should look like, and then we start that restore into our shiny new Amazon S3 bucket, as a migration.

Now, hopefully you've seen it from a recovery point of view. Next, I'm going to go into my app and point it at my new Amazon S3 bucket. So I've moved my whole CDN, through one backup job and restore, into a new bucket in a matter of seconds. Granted, there are only five videos, so it wouldn't take very long, but the concept is that we can help you move that data as well as restore it.

Notice the hyperlink; yeah, it's a different hyperlink, it just changed the blue, if you even saw that. But now, if you look in the top corner, you can see that we're going to that re:Invent restore S3 bucket, and you see its content. And I think that leaves you 10 minutes to talk.

Julia: This is not a new feature; we've had it for a while already, but customers love it because it still doesn't exist natively on AWS: VPC configuration backup. Veeam Backup for AWS comes with a preconfigured VPC backup policy that I'm going to show you in the demo. You can just enable it and it will back up all the VPC configurations from the regions you select automatically, or you can manually select which regions and which VPC configurations you want.

But let me show you here in this demo. You're going to see that the UI still doesn't have DynamoDB at the top, because we only just released the DynamoDB feature. But here's the policy: I just enabled it, it's been running for a few days already, and everything was a success. Then I go to the protected data, to the VPC, and I'm going to show you everything that we're protecting.

So we're protecting the internet gateway, the route tables, everything on the VPC: subnets, and even the VPCs themselves. We protect everything and we back it up.
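To give a feel for what "VPC configuration" covers, here's a do-it-yourself boto3 sketch that dumps the same kinds of resources to a JSON file; it's purely illustrative, not how the product implements it.

```python
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Capture the main pieces of VPC configuration as plain JSON.
snapshot = {
    "vpcs": ec2.describe_vpcs()["Vpcs"],
    "subnets": ec2.describe_subnets()["Subnets"],
    "route_tables": ec2.describe_route_tables()["RouteTables"],
    "internet_gateways": ec2.describe_internet_gateways()["InternetGateways"],
    "security_groups": ec2.describe_security_groups()["SecurityGroups"],
    "vpc_endpoints": ec2.describe_vpc_endpoints()["VpcEndpoints"],
}

with open("vpc-config-snapshot.json", "w") as f:
    json.dump(snapshot, f, indent=2, default=str)

print("Captured", {k: len(v) for k, v in snapshot.items()})
```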

Now I'm going to my AWS account and I'm going to delete some things. If you see here, it says this VPC will be deleted permanently and cannot be recovered later. So I'm going to delete it, and I'm also going to delete a few other things. Let's see what else: the endpoints. I'm going to delete an endpoint as well.

I know I ended up cutting this because it was a long demo: I deleted a security group, and I'm going back to the endpoints to delete one as well. And again, it tells us that if we delete it, we cannot recover it later. So AWS tells us that, but obviously Veeam will be able to recover it.

So now let's go back to Veeam Backup for AWS and run our policy again so it picks up all the changes. It's very fast as well. It's running; let's wait for it to succeed, and there you can see the changes: it already tells us what changes have been made.

So if an employee deletes something, by mistake or not by mistake, you'll be able to see it there, and in Protected Data it tells us that it was deleted. At the top you have that Compare button you can click, or you can filter for just the deleted configurations on the VPC, on the backup policy.

Then, if we want to restore, we can restore an entire VPC or just some selected configuration items. Here I'm going to select the restore point, the one right before the deletion I just made, and then select the items that I want to restore.

So I go there, click, select the VPC that I had deleted, and add it to my restore list; I apply and then add more: yes, the endpoint that I deleted. I had also deleted a security group that wasn't on the video, but I'm going to add that too. This is kind of like a shopping list: like we said, it's very granular, you can select what you want.

And there is the IAM role; here I'm using just the default, but you should be using one with only the permissions needed. You can also test whether the IAM role you select has all the required permissions. Then you can add a reason, check the summary, and finish.

Now let's wait; it should have worked. We go back to the AWS account, go to our VPCs, refresh, and we find that everything is back there. So what I had deleted, the endpoint and the security group as well, everything is there, and that's the beauty of Veeam Backup for AWS: it's very easy.

So I really like this feature. And now we have 5 minutes, so let's wrap it up.

I think the whole premise was obviously to give you an update on what we're doing from a cloud perspective. I didn't really touch on any of the cloud native stuff around Kubernetes, and specifically around Velero backups, but if you're interested in speaking to us about that, we're more than happy to have that conversation now or another time.

But I think the important part for us is: yes, we are focused on backup and recovery, we're very much involved in that data security story, and very much around disaster recovery. Bad things happen; they happen daily, they don't always make the news, and it's about how we get stuff back.

The reason we wanted to show that VPC demo is because it's so easy to make changes, or have things change for you, in your AWS account. So being able to see those routing tables, restore them between each backup and restore point, and be that granular, that's potentially saving some companies.

So yeah, we appreciate you being here. You can also reach out to us; our social media handles are up there. We're going to be around for a little while for questions; I know we almost ran out of time. But yeah.

I think the other big ask from us is this: as a sponsor of re:Invent we get to do these sessions, and we'd love to know your feedback. Was that useful? Was it the right blend of demos? I'm sure you've all seen so many slides this week in sessions, so we try to put a lot of demos in. Do we need more demos? Do we need to be more technical? I don't think we got the chance to say whether it's level 100, 200, etc.

Yeah, in the survey you can give us that feedback, because that helps define what we do next year when we come back, and we'll listen to it. That's the best bit of feedback we can get.

But yeah, I think with that, Julia, thank you very much and thanks for sticking around. Enjoy the party tonight!
