Experience a simulated ransomware event: Learn now to prevail later (sponsored by Rubrik)

Joe Dabrowski (MC/CIO): Hi, my name is Joe Dabrowski. I'm gonna act as the MC and the CIO today. Behind me is the other Joe, who will act as our narrator. We've got Aaron here as well. Aaron will play a couple of roles: a line-of-business leader and a Tier 2 operations analyst, along with the business continuity manager. You'll notice a few of them have hats. The hats are color coded, right? So for Aaron's top role, line of business, he doesn't need a hat. When he steps into his Tier 2 role, he gets a hat. When he steps into the business continuity manager role, he gets a gray hat. Next up is Donovan.

Donovan is gonna be our IT Ops Manager. You're in for a fun day. Oh yeah, and then we've got Maddie, my good buddy. How you doing, Matt? Doing well? Awesome. So Maddie's gonna play the app support manager, the cyber resilience manager, and a SOC analyst. And last but not least, a real fun guy, legal - it's Daniel. How you doing? Good, good, good, good.

Alright, that being said, let's get in here. Alright, so Scene 1 is gonna drop us right into the discovery. It's early in the morning, some things are going wrong in California. Let's get into it.

We're at a bullpen in an IT operations center in California. The Tier 2 analyst has shown up for his 5:30am shift and it's taken a few minutes to grab a cup of coffee and get settled. At 5:45am, we open to him logging into the operations console to see the ecommerce website is down.

Tier 2 Analyst: Oh no, this cannot be happening. Why is the website down? I should have got a page about this hours ago. This is not gonna go over well. I'm sure I'm gonna get blamed but let me get things back up and running first and I'll deal with the rest of this later.

Several nervous seconds tick by.

Tier 2 Analyst: What the hell, all the executables and the data files have been renamed. Nothing is running. This is not good. Wait a second, what's this file? What the f**k is "we have your data .txt"?

The analyst continues to type on his keyboard for a moment, then stops and immediately picks up the phone. The time is now 6am.

The Tier 2 analyst calls the IT Operations Manager on duty and wakes him up.

Tier 2 Analyst: Donovan, it's Aaron. Sorry to wake you up with bad news, but I think we have a big problem. The website was down when I came in this morning. I've been trying to bring it back up, but the library looks like it's all been renamed. There's a text file in here that says we've been hit with ransomware.

Donovan: Are you sure you're looking at the right screen? Can you access anything?

Tier 2 Analyst: Oh I checked it three times, I'm sure. And no, all I can see is that the files are encrypted and the ransom note gives instructions to contact them.

Donovan: Ok, before we do anything, we need to convene a Major Incident call that would include the CIO, representatives from IT, operations, application support, business continuity and information security.

The Tier 2 analyst kicks off the MIM process which notifies everyone in the call tree that there's been an emergency and invites them to join the conference call. The time is now 7am Pacific and the bridge has been opened. Let's listen to the call:

Donovan: My lead analyst came in to find that our ecommerce site has been down since about 10:30 last night. He couldn't kick off the restart scripts because they were all encrypted. We can't read them. We do have a ransom note that says we can get the decryption key by establishing contact with them. In the meantime, I've called the application support guys to see if they can reload the files from their release libraries. They're working on that now and we'll report back as soon as they have an update. In the meantime, we're going to keep this bridge open while we continue to check the rest of the production environment. Please reach out to your manager and let them know what's happening. They can join the bridge if they have any suggestions or questions.

They break from the MIM call for now, leaving it open. 30 minutes pass until the Application Support Manager joins. The time is now 7:30am Pacific.

App Support Manager: We've been trying to figure out what's been impacted in the attack. My team has checked all 24 of the servers for the app database and web tiers and they all show signs of compromise. Some are missing one or two key files and some are missing all of the content. I mean, the content's there but it's been encrypted and we can't unlock it. We're going to have to restore them from our backup and get the system back online.

The other problem that we saw was that the release library for the point of sale systems in the physical stores was also compromised. They automatically download the latest release content from the servers at 12:01 each day. So just after midnight, they downloaded a bunch of unusable garbage. This is a huge problem for us because now the registers are unusable until we can recreate the release content and trigger a new download.

Donovan: Wait, so you're saying we can't process any sales transactions in stores?

App Support Manager: That's right. At least not electronically.

Donovan: Ok...the first store in Chicago opens at 11 o'clock central. So that's 9am our time, which is less than two hours. Where's Aaron from business? Can he join the call yet?

App Support Manager: Doesn't look like it. I'll call him.

Donovan: Ok thanks. I'll call the CIO on his cell. I'm sure his team has already briefed him, but he needs to hear this directly from me.

The gravity of this update sinks in. In the next few minutes, the CISO joins and, after being read into what has happened thus far, he officially declares this to be a cyber incident following protocol. He then brings Legal and HR onto the call, who decide to notify all involved via email that any and all written correspondence related to this issue is considered privileged and confidential. They explain that this means no one involved can discuss the incident with any internal or external parties unless those parties have been read into the event by the CISO or his empowered team members.

The official declaration of a cyber event and the imposing formal language from the legal team give the situation a new weight. Everyone on the call feels a pit form in their stomach as they realize it's now 8am Pacific and there's only one hour left until the first door opens in Chicago, with over $433,800 already lost during the nine hours that the website has been down.

The team is now faced with the challenge of what to do with the stores. So that is the end of Scene 1.

So what are we seeing? What are we feeling? What are we hearing? Well, one - I'd argue that a lot of things broke. Right? The alerting process was down for 7-8 hours before they found out about it. And yeah, there's a lot of people trying to help, but I don't know if anyone's actually helping yet, right?

But that's one thing that jumps out at me. I think, too, they're talking about the MIM call - has anyone here been part of one of those meetings? They're pretty expensive, right? What I've also observed with those calls is that the goal isn't really to figure out what went wrong, it's to prove that you weren't the problem, right? So you've got the whole finger-pointing thing going on.

Question for you to think about as you look at your own work and that of companies today - do you consider the backup team a cyber tool? Does your backup team talk to your cybersecurity team? Do they talk about how they would recover? Do they harden the backup environment? Have you taken any action on that? These are the questions you can be asking in the background; these are the ones that keep people up at night.

So at this stage, Keller has been down for nine hours with about $433k in the hole - roughly $48k an hour; a quick model of that burn rate follows below. They still haven't actually made contact with the ransomware attackers. There's a lot we don't know. Ok, the time is now 8:30am Pacific.
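As a quick aside, here is a minimal downtime-cost sketch. The hourly rate is simply backed out of the scenario's own figure ($433,800 lost over the nine hours the website was down), so treat it as illustrative only; once the stores open, the scenario's losses climb well beyond this web-only rate.

```python
# Rough downtime-cost model. The rate is derived from the scenario's own
# figure ($433,800 lost over 9 hours of website downtime), so it only covers
# the overnight e-commerce outage, not the in-store losses that follow.
WEB_HOURLY_LOSS_USD = 433_800 / 9  # ~$48,200 per hour

def cumulative_loss(hours_down: float, hourly_loss: float = WEB_HOURLY_LOSS_USD) -> float:
    """Cumulative revenue loss for an outage of the given length."""
    return hours_down * hourly_loss

print(f"9 hours down:  ${cumulative_loss(9):,.0f}")   # matches the ~$433.8k in the scenario
print(f"12 hours down: ${cumulative_loss(12):,.0f}")  # roughly where the 12-hour RTO sits
```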

We're back on the Major Incident Management call and 30 precious minutes have ticked by. The cyber resilience and business continuity manager joins the bridge with the line of business leader and is briefed on the current recovery efforts.

Cyber Resilience Manager: I wanted to bring everyone up to speed on current plans for the retail stores. We've been discussing the issue with the point of sale systems. The line of business leader understands the current problems, but his view is that, well, actually I'll let him speak to it.

Line of Business Leader: Thanks. We've been discussing whether or not it makes sense to open as planned with our senior leadership team. Given the current issues in our physical stores, there are a few perspectives that we've considered - like the impact of revenue if we don't open, the media inquiries that will surely come as the word spreads, etc. What I really need from this team is your realistic expectation of recovering the point of sale systems in the next few hours so that I can make a recommendation.

Donovan: Understood. I'm sure that you've been brought up to speed on the ransomware attack and that we're working on restore operations as quickly as possible. I'm committed to having everything back online later today and I have a high degree of confidence that we'll be fully recovered by tonight. I agree with you that keeping the stores closed would likely only make things worse, especially if we have a way to continue selling without a point of sale system for a while. So we do have a plan that we've just developed with the help of your team, but it involves some older technologies and processes that none of our in-store personnel have ever used before. But we think it will work as a temporary solution as long as we're only talking about a few hours until we're back up.

Line of Business Leader: Oh we absolutely are. I know you're already working with my team and we'll support your decision either way. Which way are you leaning?

Donovan: So we've already had the sunk cost with the buildings, the staff, the inventory. So going manual until we're able to re-establish operations is probably the better option even though sales will be impacted. Keeping them closed guarantees that we'll maximize the losses. And we just can't afford to do that given that we have a viable option to operate them without their registers. I'll confirm with my leadership but opening as planned was the decision that we came to as well. And after hearing your comfort level, I'm sure they'll be on board too. So let's go ahead and move on knowing that we're going to go ahead and open.

The time is now 9am. The business continuity manager hops off the line to supervise the roll out of the store sales transaction process without the use of registers.

Donovan: Hey app support team, IT Ops - where are we with the restoration efforts?

App Support Manager: So we've been working with our IT analysts to restore the affected code. But our application release libraries as well as the application source code libraries, they're all encrypted and we're unable to recover them.

Donovan: Yes, that's right. I'm not sure how widespread the problem is, but at this point, all of the normal routes to recover operationally are impacted. We're using our configuration management database to identify which servers and databases are used in the web app so we can find existing usable backups to get us back online. The downside is that proper recovery relies on the application teams keeping the CMDB current. If it's not current, we could recover incomplete data, which could break the app, or worst case, there could be compliance risk from not knowing where the PCI and PII data is. So the question is: what are the chances it's current?

App Support Manager: I don't know enough right now to give you a confident answer. Let me double check and I'll circle back on my end.

Donovan: I just spoke with a senior partner at our external security consultant and our teams are connected. They have all their best people working on it to give us some guidance. We should have an update soon.

The call breaks for now with IT Ops hoping for an accurate CMDB and the CISO hoping that the external security consultant can offer significant help.
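The CMDB-driven lookup Donovan describes boils down to joining two inventories: which servers an application depends on, and which backups exist for each of those servers. The sketch below assumes flat CSV exports with invented column and file names; a real CMDB and backup catalog would expose richer APIs, but the stale-data risk he worries about is exactly the same.

```python
# Minimal sketch of a CMDB-driven recovery lookup: given an application name,
# find the servers it depends on and the most recent backup for each.
# The CSV layout, file names, and fields are assumptions for illustration.
import csv
from collections import defaultdict

def load_cmdb(path: str) -> dict[str, list[str]]:
    """Map application -> list of server names from a flat CMDB export."""
    app_servers = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):          # assumed columns: application, server, tier
            app_servers[row["application"]].append(row["server"])
    return app_servers

def latest_backups(path: str) -> dict[str, str]:
    """Map server -> newest backup timestamp from a backup-catalog export."""
    newest: dict[str, str] = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):          # assumed columns: server, backup_id, completed_at (ISO 8601)
            if row["completed_at"] > newest.get(row["server"], ""):
                newest[row["server"]] = row["completed_at"]
    return newest

def recovery_plan(app: str, cmdb_csv: str, catalog_csv: str) -> None:
    servers = load_cmdb(cmdb_csv).get(app, [])
    backups = latest_backups(catalog_csv)
    for server in servers:
        # A server missing from the catalog is exactly the "stale CMDB" risk:
        # you don't know what you don't know.
        print(server, "->", backups.get(server, "NO BACKUP FOUND"))

# recovery_plan("ecommerce-web", "cmdb_export.csv", "backup_catalog.csv")
```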

The time is now 9:30am Pacific. While part of the team continues working on the store opening, two teams that don't typically work together have been trying to collaborate with the external security consultant. The internal cyber resilience team is part of IT Ops and is primarily focused on restoring operations. The cyber incident response team is part of the CISO's org and is focused on managing the forensics and threat analysis for the event. Both are on the line now with a representative from the external security consultant.

John: Thanks for jumping in so quickly, everyone. We're hoping you can help us. Our number one goal is to restore operations as quickly as possible. That needs to be our focus.

Unfortunately, this isn't a scenario that we previously defined in our business continuity planning exercises. While we obviously do need to restore, the first priority is to find out where and how this attack originated, so that when we do restore, we don't get wiped out again.

We've quarantined the impacted servers and isolated them from the production network. We're using CrowdStrike to gather what we can from the affected endpoints and we'll have that information made available to you. That's a good start.

I've looked at the ransomware note. I'm going to guess that this is a Platypus attack. It fits their TTPs and we've been seeing a lot of their handiwork in the last couple of years. We'll get started on analyzing what you've gathered. We have some IOCs that we can share with you guys to start checking the rest of your environment for other infections and to get to reasonable confirmation that this is in fact, Platypus.

The call breaks with the external security consultant analyst promising to get that IOC list over as soon as possible. The time is now 11am Pacific.

Let's check in on the MIM call. Cyber legal is now on the line and opens the call:

Legal: Yeah, this is legal speaking. After consulting with the General Counsel, CISO and CIO and federal law enforcement, we've established a contact within the ransomware group. The ransom that they set is $1 million and they want it in three days.

Putting a number and deadline to the event makes everyone shift in their seat. Sensing the pressure rising, the CIO speaks up:

CIO: Listen team, I'm confident that this team can get our data and systems back online quickly. We drill for business interruptions all the time and this is no different. We run thousands of backup jobs daily. We've spent millions building our defense in depth approach for years, which includes a 3-2-1 strategy. I'm confident that all the preparation will pay off and I have no intention of paying a million dollars to a bunch of criminals in Russia.

Acting on that assurance, the leadership team decides not to pay the ransom at this point. The next contact with the ransomware group is scheduled for 24 hours from now. All efforts now turn to finding the last known good recovery point to get back up and running as well as working with the external security consultant to detect the scale of the attack and how to respond.
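The "3-2-1 strategy" the CIO leans on is the old rule of thumb: keep at least three copies of the data, on at least two different media types, with at least one copy offsite. A toy check of a backup inventory against that rule might look like the sketch below; the field names are invented for illustration, and the immutable/offline flag is called out separately because that is the property a ransomware event actually tests.

```python
# Toy check of a backup inventory against the 3-2-1 rule the CIO cites:
# >=3 copies, >=2 distinct media types, >=1 copy offsite. Field names are
# invented; "immutable" (offline tape, WORM/object lock, logical air gap)
# is flagged separately because that's what ransomware actually tests.
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str        # e.g. "disk", "tape", "object-storage"
    offsite: bool
    immutable: bool

def meets_3_2_1(copies: list[BackupCopy]) -> bool:
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

inventory = [
    BackupCopy("disk", offsite=False, immutable=False),
    BackupCopy("object-storage", offsite=True, immutable=True),
    BackupCopy("tape", offsite=True, immutable=True),
]
print("3-2-1 satisfied:", meets_3_2_1(inventory))
print("Any copy ransomware can't touch:", any(c.immutable for c in inventory))
```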

Unfortunately, the 2pm MIM call comes and goes with no demonstrable progress and two pieces of very bad news:

  1. No luck finding a last known good backup
  2. Backup scans are showing signs of encryption

The time is now 6pm Pacific. Everyone is back on the MIM call.

John: Alright, team. Finally, some good news. The operations team has identified a set of backups that do not appear to be compromised, and they have begun restoring. The estimated time of completion is six hours from now, right around midnight, after which the application teams will restore the application server code images and conduct acceptance testing before bringing the systems fully back online. Our next update will be at 2am to allow the app team to do their work after the restore is completed.

In the meantime, the CEO is asking for an update. I'd like to hear from our cyber resilience team.

Cyber Resilience: So in terms of meeting the RPO and RTO business recovery objectives, the RPO is essentially unachievable due to the lack of a recent known good backup that would provide the required 5 minute data recovery point. We've already exceeded the RTO of 12 hours as of 10:30 this morning.
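For reference, the RTO math here falls straight out of the scenario's own timeline: the site went down around 10:30 the previous night, so a 12-hour RTO expired at 10:30am, and with no recent clean backup a 5-minute RPO cannot be met. A minimal sketch (the calendar dates are placeholders):

```python
# RPO/RTO check using the scenario's own numbers: outage began ~10:30 PM
# Pacific, the stated RTO is 12 hours and the stated RPO is 5 minutes.
from datetime import datetime, timedelta

outage_start = datetime(2024, 1, 1, 22, 30)   # date is arbitrary; time from the scenario
rto = timedelta(hours=12)
rpo = timedelta(minutes=5)

rto_deadline = outage_start + rto             # 10:30 AM the next morning
now = datetime(2024, 1, 2, 18, 0)             # the 6pm status call
print("RTO deadline:", rto_deadline.strftime("%I:%M %p"))
print("RTO exceeded by:", now - rto_deadline)

# RPO: the newest clean backup must be within 5 minutes of the outage start.
last_clean_backup = None                      # no known-good recent backup yet
rpo_met = last_clean_backup is not None and (outage_start - last_clean_backup) <= rpo
print("RPO met:", rpo_met)
```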

Joe: Yeah, unfortunately, all of the stores have reported significant impact from the manual checkout processes. We've had long lines to start out, but customers eventually just left frustrated. They didn't make very many purchases and store managers are reporting an expectation of 75% revenue loss at this point.

No one seems to be envious of the CIO's task to give that update to the CEO. They drop off the call planning to rejoin for the 2am call after the restoration and the application testing is complete.

The data restoration was completed successfully just before midnight and the application teams began restoring the application and conducting testing at 1am Pacific.

The application support manager unexpectedly rejoins the bridge with an urgent update:

App Support: We lost the damn test environment again! We nearly completed the acceptance test and the system went down. Is Donovan on? I'm trying to find out when we'll be back up.

After a barrage of texts, the IT Ops manager joins the call. The app support manager restates his unfortunate update and the IT Ops manager goes to investigate.

The time is now 2am Pacific. All key personnel are on the call.

IT Ops: The application and data files look like they did before we restored them. So we did some checks on the file attributes and we found that they were re-encrypted again just a few minutes ago. I hate to say it, but the only way that's possible is if the malware is still in our environment and triggering reinfections.

The CIO also just asked that we pull the off-site tapes, bring them on site and then try to restore from them. I know the data is over 30 days old, but I don't know what else to do at this point. Everything we have on premises seems to be infected with whatever malware is doing this. It'll take Iron Mountain until about 10am to get the tapes here.

Legal: Yeah, this is legal. On my end, we had an emergency call at midnight with HR, the CEO, and the media team to respond to the incoming media requests from the press. They've been building throughout the day. Exec management has authorized the media team to respond with an acknowledgment of, quote, "ongoing technical issues that are being addressed" and that Keller has, quote, "full confidence that it will be able to resume normal online operations and retail operations very shortly."

After a few more updates, everyone prepares to sign off for the night until Cyber Legal suddenly chimes in:

Legal: Whoa, whoa, whoa, hold on everybody. Unfortunately, we just received a second email from the ransomware actors. They've increased it to $1.5 million.

John: Whoa wait, why would they increase it? We still have over 50 hours left to pay.

Legal: Yeah, we do. But it looks like they're just trying to up the pressure on us. Apparently it's a fairly common practice. I'm sure our CSO can confirm this as well. They know they're in control and can demand whatever they want.

An uneasy silence settles in the room. We hear the CIO answer another call in the background. After hanging up, he addresses the MIM:

CIO: Well, that was Erin our CEO. In light of the new ransom amount and the current, uh, lack of progress in recovering, she's convening an emergency board meeting to review the decision not to pay the ransom.

They close the call with the CIO looking decidedly less confident than a few hours earlier. He is going to have to manage the politics of who will be blamed if this next recovery attempt isn't successful. He'll also have to manage expectations around the data loss if they have to recover from 30-day-old tapes. He desperately tries to think of other creative solutions and who else can help internally. He starts questioning his own decisions during the day and is frustrated that he's not more prepared as he pieces together his presentation for the board. He's starting to consider that this may be a career-defining event.

Alex: "My team has been great in helping us understand why that's happening. I'll let him speak to that in his update. But the net impact is that it's not always the same files that could be encrypted. So we're having to try and piece together what we need bit by bit literally."

John: "Ok, I mentioned tape. What's the downside of recovering from tape? Why don't we just start with that right now? We're considering it the last resort for a couple of reasons. The most important of which is the data freshness. If tapes were the only way to recover, we'd need to apply all the point in time snapshots for the last 30 days to the database to get us back in sync where we were on Saturday night, provided that those snapshots are even clean because of that. Our plan is to stay the course and recover the files that we know are needed from this. Once we have clean copies of each, we can put them in an isolated environment and move on to the next wash, rinse repeat until we have everything we need to recover."

Alice: "Got it. Thanks, John, appreciate it."

David: "Yeah, sure. We have a total impact of about 2.76 million."

Alice: "Ok, great. Thank you, Alex."

Alex: "This group is linked to a certain nation state. Our name, the attack vector that they use, we believe is a zero day exploit in one of our SAS applications. We haven't seen any proof in the wild yet, but it's probably because it's difficult to replicate. My team has been disassembling it in our lab and we believe the primary use of attack is to leverage a precise combination of revert execution and a buffer overflow vulnerability exploit to overwhelm the authentication mechanism quickly escalate privilege and begin lateral exploration of the environment. The virus is able to evade our defenses due to its holomorphic nature. We believe that both the code and the encryption targets change each time it runs, which makes it very difficult to isolate and remediate."

Alice: "Alex, I'm not sure I followed what you just said. Can you just tell us simply how did it happen? And can we prevent it from happening again?"

Alex: "No, not right now. Not yet."

Alice: "Ok, Alex. I'd like you to continue to work on that and get back to us in the next update, please."

David: "Yeah, this is having a significant impact on our plan. We need to take care of this as possible. We invest a great deal of budget year over year on cyber and business applications and resilience. I'd have expected that we'd be able to handle this type of event pretty easily. Clearly, that's not the case. As I mentioned, we've lost 2.76 million in the last 36 hours. We continue to lose money by the hour, our models show we're on pace to lose about 200 grand an hour and that's not even including the lasting financial impact and the reputational brand damage, which right now unfortunately started talking with."

Alice: "So that's our direct physical impact. Is there any potential of fines or anything else that we need to consider?"

Legal: "First, it's important to note that right now we only have a potential data breach and I stress the potential because we don't know for certain yet whether or not data was exfiltrated. So that's where we are right now. Now, if we do determine that there has been a material breach involving PII, we'll need to notify the privacy officer in every state. That's usually the attorney general for every state. Each state has different privacy laws and disclosure requirements for PII. So I'll update you there. Our notifications will also have to include states where customers cross state lines to shop or bought from the online portal. So we're effectively talking about notifying all 50 states. The biggest potential regulatory impact right now though is from GDPR. Our online web app does have European customers and they're protected by GDPR. GDPR has very specific targets for when we have to notify and how much potential fines are for failing to meet those reporting guidelines. The current GDPR guidelines carry fines of up to 4% of global revenue. So we're potentially looking at a $40 million fine."

Alice: "So how long do we have to notify them if we're certain there's a breach of PII?"

Legal: "We have a maximum of 72 hours to notify the EU. So if we find that EU PII was compromised, specifically, we'll have until Wednesday night to take action to make the notification. I'd like to go ahead and request to retain expert outside counsel to help prepare. We're gonna have a raft of state, federal, civil lawsuits in the US as well as GDPR fine lawsuits and other potential damages in the EU."

Alice: "Approved. Let's go ahead and get going on that."

Alice: "I think we're gonna break now. Everyone please be available in case we need any additional information. The board's gonna chat now and vote on what we think we should do on paying the ransom."

I think it was just last week, there was an incident where a threat actor got into a US-based company, and that company did not disclose the breach. So the threat actor was nice enough to report it to the SEC on behalf of that company. That's an evolution of tactics, and we're going to start seeing more of it. I mean, they used to post this stuff on the dark web, behind Tor browsers and the like. Now it's being put in plain text on the public internet. The bad guys are really ratcheting up the pressure.

Anyone heard of a piece of software from Progress called MOVEit? It's a pretty popular piece of software, and the bane of many people's existence for the last six months. It's secure file transfer software, in essence. What the bad guys have done is attack all of the companies that use it - it's a supply-chain-style attack. There have been over 3,000 global organizations hit by the MOVEit breach in the last six months.

Anyone here live in Maine? The state of Maine announced that every citizen's data was spilled during that attack. And we've not seen a single incident of encryption in any of those attacks - again, close to 3,000 of them. So the bad guys are evolving their tactics. The question is, are we evolving our practices to keep up?

Do we know where our sensitive data lies today? Do we know where the open access is? Are we taking steps to mitigate that risk post-breach? This is the number one question your cyber insurer, or maybe even the FBI, will ask you. The FBI will probably roll in with printed reams of the sensitive data they found. Your cyber insurer will say, show me the sensitive data impact before we do anything. The onus is on you to figure that out.

So we're gonna get into Scene 5 now. Keller's been down almost 40 hours, they're about $3.5 million in the hole, and they've still got to contact the bad guys and figure out what to do. Let's get into Scene 5.

Keller's cyber legal team and external counsel have compared these results with a small data extract from the ransomware group, provided to assure Keller that the data was actually taken and could be decrypted by the attackers at will. The extract contained examples of PII as well as credit card information, both of which were stored in an unencrypted format by the applications.

The recommendation has been made to the board, CIO, and CISO that they should assume Keller is officially on the clock in terms of meeting the various data breach reporting requirements of local, state, and federal authorities as well as the European Union regulators.

The board is also committed to launching a full assessment of the problems with the classification and volume of data in Keller systems once the ransomware event has been resolved, to ensure that the risk can be properly quantified for future needs.

The CIO and CISO have been informed that the cost of the assessment, which will be carried out by an independent outside authority, will have to be covered from the existing CIO budget in the next calendar year. No additional funding will be provided, given the likelihood that any cyber insurance coverage will be limited due to the numerous issues found so far with Keller's cyber resiliency capabilities.

The time is now 9pm Pacific. The external security consultant's analyst is on a call with the Keller SOC analyst and the line of business leader. He's explaining the process for discovering how far back this malware infection goes. Let's listen in.

Zima has been working to identify the ransomware actors responsible for the attack based on known tactics, techniques, and procedures. We have a 90% confidence level that this was, in fact, the work of the Platypus gang. We've identified a set of attack vectors that we believe were used to infiltrate Keller's cyber defenses and gather the operational intelligence needed to remain undetected while planning the attack.

As a next step, we need to have the Keller threat hunting team leverage the specific IOCs to hunt through the rest of the environment for any signs of them. This will help us determine how much remediation work it will take to clean up the environment and prevent anyone else from using them against you.

Again, based on the experience of previous Platypus victims, the team here believes that the actors have probably had privileged admin access to your systems for as long as six months.

I don't want to take us off track here, but I really want a better understanding of what the IOCs are and how they can actually help us.

Yeah, good question. So IOCs are indicators of compromise. They're like the fingerprints of a particular ransomware attack. For example, we know Platypus likes to disable Windows Defender protection and replace it with a key DLL. So the SOC team will go look through the environment for any place where Windows Defender is disabled, which will tell us that the Platypus team has compromised that part of the environment. There are hundreds of these types of fingerprints associated with Platypus, and we're going to give you a list of all of them so that you guys can go and dust for these prints.

Ok. So when we find those fingerprints, we'll be able to determine how long the bad actors have been in the environment. It seems like a pretty intensive process, so how long do you think that's going to take?

Yeah, good question. It really depends. Usually we give your team the IOCs and they go and write scripts to check the environment, so it really depends on how quickly you can turn that around. If you don't have the tools to write those scripts, you'll have to do the search manually, and that could take a while.

Well, we'll get started on this as soon as we can. We still have to cover all the normal SOC traffic too. When we've tried threat hunting in the past, we've mostly just used very manual processes to check configs on critical servers. I wish I could give you a more exact time frame, but since we've never used these tools, it's really hard to estimate. We'll do our best once the external security consultant gives the team the specific information to search for.
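The "scripts to check the environment" that come up here are usually small sweeps along these lines: hash files in suspect locations against the consultant's IOC list, and flag hosts where Windows Defender has been disabled by policy (the Platypus fingerprint described above). The hashes, paths, and the particular checks below are illustrative assumptions, not real Platypus indicators.

```python
# Minimal IOC sweep of the kind the SOC analyst describes: (1) hash files in
# suspect directories against a consultant-supplied IOC hash list, and (2) on
# Windows hosts, flag the policy setting that disables Defender. All values
# below are placeholders for illustration.
import hashlib
import pathlib

IOC_SHA256 = {
    "d2c3...placeholder...",   # real hashes would come from the IOC list
}

def sha256(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def sweep(root: str) -> list[pathlib.Path]:
    """Return files under `root` whose SHA-256 matches a known IOC."""
    hits = []
    for p in pathlib.Path(root).rglob("*"):
        if p.is_file():
            try:
                if sha256(p) in IOC_SHA256:
                    hits.append(p)
            except OSError:
                pass   # unreadable file; a real sweep would log and move on
    return hits

def defender_disabled() -> bool:
    """On Windows, check the policy value that Defender-disabling tooling flips."""
    try:
        import winreg
        key = winreg.OpenKey(
            winreg.HKEY_LOCAL_MACHINE,
            r"SOFTWARE\Policies\Microsoft\Windows Defender",
        )
        value, _ = winreg.QueryValueEx(key, "DisableAntiSpyware")
        return value == 1
    except (OSError, ImportError):
        return False   # key absent, or not a Windows host: nothing to flag

if __name__ == "__main__":
    print("IOC hash hits:", sweep(r"C:\ProgramData"))
    print("Defender disabled by policy:", defender_disabled())
```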

They spend 12 hours conducting spot checks on the parts of the environment not impacted by the ransomware event. The net result: they've found multiple occurrences of the IOCs in the limited number of places they've checked, dating back at least four months.

The board is set to reconvene to hear updates on the data restoration efforts, legal work and recommendations on next steps. The management bridge call has remained open throughout the night to keep everyone up to date on various team efforts. There's been no significant progress reported on the restoration of the ecommerce systems or the retail point of sale systems.

All right. Well, if this was my plan of recovery, I probably would have started here maybe a couple of days ago, right? But what we're up against are really two different sets of foes. We've got what are called advanced persistent threats - these are the nation-state actors, the people that want to get into your network and will figure out a way to do it. They're generally looking for things like fighter jet designs, ship designs, designs of nuclear power plants, all sorts of things like that.

What we often see, and what's on the news every morning when I wake up, is what we call RaaS gangs - ransomware as a service. These can be similar folks, similar personas, coming from the same places and trained by the same people, but they're allowed to basically operate without fear of retribution in many countries across the world.

These gangs are loosely affiliated. They're constantly shifting things around, constantly rebranding and changing their techniques. There were some leaks last year, the Conti leaks, where the internal chats of one of these groups were put out in a public forum. Turns out they had an HR department, they had a bonus structure for recruiting people, they had internal politics just like John does here at Keller.

So while we like to say it's some guy in a hoodie in his mom's basement eating meatloaf, that's really not what we're up against. We're up against many, many different organized groups. The good news in all of this, if there is any, is that dwell time has been significantly reduced, at least in general.

Dwell time is the amount of time between when the bad guy gets into your network and when they execute their attack. According to Sophos in their latest report, it's down to five days; last year it was in the 12-to-15-day range. There are still plenty of examples of six or eight months, ten months, something like that. But in general the bad guys, especially the RaaS actors, are trying to monetize things as quickly as possible. They're really after the money.

So Keller is 58 hours into this, $5.2 million down, and still hasn't done anything with the bad guys. And I hate to do this to you, but we're going to leave you on the cliffhanger - no Scene 6. In the real, full version of this we do give you some closure, we do tell you what happens, and it's probably what you're expecting. But what do we do with this data, right?

So let's take a step back. What we call this type of event is a denial-of-data attack, because if you think about it, whether they're encrypting your data or taking it outside your four walls, they're doing things that make them money and cause you a lot of harm. That harm could be downtime, it could be fines, it could be the reputational side of it.

IBM released a report last year that said the average cost of a breach is $4.34 million, which by itself does not scare me. However, the thing they said in that report was that over 50% of those costs were accrued more than one year after the breach. So there's a significant long tail associated with these events, because of all the cleanup that we've talked about. Over the last couple of years the bad guys have gotten really good. They know how internal IT operations teams work and what they do. They're looking for backups that were built on Windows platforms. They're looking for repositories hanging off the network. They're looking for files with open access. They're looking for copies of databases that somebody put on a share somewhere. They're attacking the cloud too, and they're attacking SaaS. Where does a ton of sensitive data lie? Your email system. Most people over the last couple of years have put that up in the cloud, which is great, the availability is there and all that, but it does not absolve you from having to protect your information and be able to recover it. It's called the shared responsibility model, and it's a real thing.

So, next slide. Rubrik at this point has about six or seven thousand global customers, and within support we stood up a tiger team called the RRT, the Rapid Response Team. This team has helped hundreds of organizations recover from cyber events, so we've learned a couple of things. The first question you want to ask yourself is: will my backup copies survive an attack? And the second question you want to ask right away is: will my backup environment, my recovery capability, survive an attack? We've had so many people focused on getting the data copied somewhere else that they don't realize they need that actual environment in order to recover. So that's question number one. Question number two is: how do we begin to recover? We talked about this today - what data changed during the attack? There is no tool that makes that easy today. How do I get a clean copy? The bad guys have put malware all across your backups, and if you just go and grab a random one, there's a high likelihood you'll reinfect yourself. Or you're going to take the time to grab a slice of the data, put it over here, run it through your EDR, XDR, whatever you've got, and iterate. So it's going to take time - and again, time and data loss are your metrics.

Third thing: what did they exfiltrate? Is it of value? Do I need to alert anyone? Does it change my response? Fourth thing: how do I recover my organization, my operations, my workflows? Which order do I do them in? Which are the most critical to me? Is that something I can automate? Is that something I can test? Is that something I can demonstrate? Oh, wow, somebody just asked me about cyber insurance rates - maybe this is important to them, right? So there are a lot of different people within your organization that are going to touch this type of thing, and it's a lot different, like I said before, from that DR kind of test you run. Ten years ago, you never had to worry about a SQL Server lighting itself back on fire after you recovered it. You do today with cyber attacks.

So next slide: here's how we help. Whether you're running workloads on-prem, you've lifted and shifted, or you're using SaaS, Rubrik can make sure you've got recoverable data and make sure you get recovered. We've got encryption detection, which by itself, I'm just going to tell you, is not that cool, except for two things. One: if that encryption detection tells you what data changed, so you know what to recover, that's going to be helpful. Encryption detection against backups that just says "oh, it changed by 7%" is not helpful on its own. The other thing is that a majority of the attacks we've seen in the last six or eight months, at least on-prem, have been hypervisor-level attacks. You've got no endpoint protection there, and they can take out a majority of your infrastructure very, very quickly. So how would you detect those? That's kind of a hidden factor. And, oh yeah, maybe your tooling is built on top of your VMware infrastructure too for on-prem, so that's going to hurt you on the recovery side.

Three: sensitive data discovery. Today we scan all of your backups and tell you you've got sensitive data here, here, and here, open access over there - do something about it - and post-breach, here's what they got. Then there's threat hunting: you can run it against all of your backups, and we can scan them for IOCs, YARA rules, whatever you want to use. The nice thing about Rubrik is that it's a hidden backup copy, so if you're under attack and you start threat hunting against the hidden backup copy, the bad guys aren't going to know you're monkeying around, so they might not raise the price on you.
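Concretely, hunting against a backup copy with YARA rules can be as simple as mounting a snapshot read-only and walking it with the yara-python package, as in the sketch below. The mount point and rule file are placeholders; in the scenario, the rules and IOCs would come from the external security consultant.

```python
# Sketch of threat hunting against a mounted, read-only backup copy using
# YARA rules (requires the yara-python package). The mount point and rule
# file are placeholders, not real Platypus content.
import pathlib
import yara

RULES = yara.compile(filepath="platypus_iocs.yar")   # consultant-supplied rules
BACKUP_MOUNT = "/mnt/backup_snapshot_2024-01-06"     # read-only mount of a backup copy

def hunt(mount_point: str) -> None:
    for path in pathlib.Path(mount_point).rglob("*"):
        if not path.is_file():
            continue
        try:
            matches = RULES.match(str(path))
        except yara.Error:
            continue                                  # unreadable or oversized file; skip
        if matches:
            print(path, "->", [m.rule for m in matches])

hunt(BACKUP_MOUNT)
```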

And last but not least, let's automate as much as we can. Let's demonstrate to the business that we can recover from this. Because in this scenario, Keller's John said he was good - they test for this, they drill this all the time - but it was very clear they had never done this before. Unfortunately, that's a common occurrence in my travels.

So let's go to the next slide. Alright, so what do you do with this information? Maybe I made you laugh - more probably Donovan made you laugh. Maybe I made you cry. I probably asked you some questions. So first off, if you want us to bring this to you - the full version, the one where you sit down and read along and see all the videos and all that - we will do it. I've personally hosted 60 of these in the last year. We'll bring it to you, or we'll come to your city and you can do it with your peers; if you've got some information security group, we can do that too. However you want to do it, hit the QR code and we'll get you hooked up. Two, stop by the booth, 1352. And three, Topgolf tomorrow night with Shooter McGavin - and I think we all know what he has for breakfast.

So with that, I want to thank my esteemed colleagues for doing so wonderfully today. I want to thank you all for coming. I hope you learned something. Take this back, go ask the questions, go be a hero. Thank you.
