Streamlining security investigations with Amazon Security Lake

Hi, everyone. My name is Matt. I lead our worldwide go-to-market for detection and response services here at AWS. It's a collection of six security services; you're probably familiar with many of them, including GuardDuty, Security Hub, Inspector, Macie, Detective, and now the new Amazon Security Lake.

So thanks for coming to the session. I'll be joined on stage by Ross, a Security Lake solutions architect, and Andrew, the head of cloud and product security at SEEK.

I'm going to go over some of the challenges behind why AWS actually built Security Lake, and then I'll go into an overview. Ross will give a demo, and then Andrew will talk about the practical ways they've deployed Security Lake in production.

What's interesting about Security Lake is that it's not an acquisition. It came from challenges raised by our customers. At AWS, of course, we work backwards from the customer; we're very customer obsessed, and we heard all of these challenges customers were having with building security data lakes: the balance between IT and security, centralizing all these logs and information to do detections on events. So we wrote a doc, we staffed a team, and we built a solution, and that solution is Amazon Security Lake.

What it's intended to do is centralize and normalize your security-related logs across your entire estate. So this is not just an AWS story; it's a hybrid, multi-cloud story. It's going to provide long-term retention, so it helps separate the detection capabilities from what you need for long-term storage or historical investigations. And then it gives you, basically, freedom of choice.

What's big, especially with all these Bedrock announcements, is optionality. You no longer have to be selective about your analytics services. Security Lake gives you options on where you actually want to do your detections, build your visualizations, and use the preferred tools that your teams may already have skills with.

And what this all comes down to is a modern data strategy, right? We've got to get everything in the same place, and it's got to be in the same schema. If you simply search on Google for the key pillars of a modern data strategy, the first thing it's going to tell you is a common schema.

So AWS co-founded an open source project with Splunk and a coalition of 18 other businesses called OCSF, the Open Cybersecurity Schema Framework. This is a collection of event classes, and it's a community-based project to help standardize security-related data across multiple vendors and ISVs. Forbes has published articles about it, there's a growing list of contributing partners, and it's picking up good momentum in the industry.

And although this project is completely independent of Security Lake, the two are closely linked. Security Lake is built on the OCSF schema, and there are really two categories of partners, which I'll talk about in just a little bit.

Now, part of my role here is really just to get everyone up to speed on what the thing actually is. Security Lake is a fully managed security data lake that exists in your account. It's a data management solution: it helps get source data into S3, and it also helps you manage subscribers to the data so you can send it to your preferred analytics vendors.

The lake itself is a collection of regionally dispersed S3 buckets, but it also gives you insights and the ability to manage lifecycle and policies for the logs you're storing in the lake. And it does so in a thoughtful way that makes that data more available for analytics.

So at its core, Security Lake is a logging service, but it also helps manage all the underlying infrastructure needed to do ETL and data movement. Now, back to our Google search on modern data strategy: the second thing it's going to tell you is an expansive network of partner solutions.

There is no one cloud offering that does everything, right? We have different detection tools, we have different visualization tools. There are several pillars across the enterprise and different vendors that we use for things we like, and some of them are better at certain things than others. So we use them for their strengths and capabilities.

Now, there are two projects, right? This OCSF thing and this Security Lake thing. The list of logos on the screen here are Security Lake partners. Not only do they contribute to the OCSF community and produce data in the OCSF format, they've also worked with AWS to provide a Security Lake integration.

They all have well-documented instructions on how to set that up. Some of them provide data to the lake, some of them get data from the lake, some of them do both, but they all do OCSF and they all integrate with Security Lake.

So now I have optionality. Once the data is in the lake from all of my partners, I can then send it to the subscription partners. OK, so now I've got my common schema and I've got my expansive partner network. But what does it actually look like?

This is the mental model I see security teams moving towards: they want to use a data lake first. Typically what happens today is they have these logs, like VPC flow logs and CloudTrail management events, and they set up a Kinesis Data Firehose and stream them outside of their account first, do some correlations, analytics, and detections (maybe they don't use all the data: 70% of it, maybe 0% from some sources), and then they replicate that data back into S3.

So we're changing the construct just a little bit. The idea is that you would write that data to S3 first and then set up subscribers from there. What Security Lake helps you do is the things people don't like to do: building Glue catalogs, writing custom Lambda functions, applying Lake Formation permissions, creating the S3 buckets; the list goes on.

So Security Lake automates those underlying orchestration tasks needed to have a full security data lake. And as you'll see, enabling VPC flow logs in Security Lake is just a couple of clicks, and that's for all accounts in all regions. It lands in a consistent format, queryable by Athena in minutes, because Security Lake makes that data available for analysis much more quickly and easily.
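(For readers who want the programmatic equivalent of those couple of clicks, here's a minimal boto3 sketch of enabling a natively supported source. It assumes Security Lake is already enabled in the delegated administrator account and assumes the post-GA CreateAwsLogSource request shape; the account IDs and Regions are placeholders.)

```python
# A minimal sketch of enabling VPC Flow Logs as a Security Lake source across
# accounts and Regions with boto3. Assumes Security Lake is already enabled in
# the delegated administrator account; IDs and Regions below are placeholders.
import boto3

securitylake = boto3.client("securitylake", region_name="us-east-1")

response = securitylake.create_aws_log_source(
    sources=[
        {
            "sourceName": "VPC_FLOW",                      # natively supported source
            "sourceVersion": "2.0",                        # OCSF-mapped source version
            "regions": ["us-east-1", "us-west-2"],         # Regions to collect from
            "accounts": ["111111111111", "222222222222"],  # member accounts to enable
        }
    ]
)
print(response)
```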

Then on the subscriber side, down at the bottom, you'll see that I can select who gets what data. Maybe I don't need to send all my VPC flow logs to ISV ABC, but I do want to send all my CloudTrail management events and my Route 53 DNS queries. So I can keep those VPC flow logs in my security data lake to do historical look-backs per my compliance requirements for long-term retention: I can keep them warm for 12 months, then move them into Glacier or colder storage, and maybe expire them after seven years.
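(Security Lake manages those retention tiers for you through its own settings; as a rough illustration only, the effect is similar to an S3 lifecycle rule like the sketch below. The bucket name and day counts are illustrative, not the service's real configuration.)

```python
# Rough illustration of the retention tiers described above, expressed as an S3
# lifecycle rule. Security Lake applies retention for you via its own settings;
# the bucket name and day counts here are purely illustrative.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="aws-security-data-lake-example-bucket",  # hypothetical lake bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "security-lake-retention-example",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Status": "Enabled",
                "Transitions": [
                    # keep logs "warm" in S3 Standard for ~12 months, then move to Glacier
                    {"Days": 365, "StorageClass": "GLACIER"}
                ],
                # expire after roughly seven years for long-term compliance
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```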

So this streamlined security operations mental model is what Security Lake accelerates and helps us achieve.

Now, that was the mental model behind it, but let's actually put it into practice. There are really a few things we want to bring together: all of our findings and all of our logs. You can get a lot of logs natively with AWS; you can get a lot of information from your CloudTrail management events, your VPC flow logs, your DNS queries, and your Security Hub findings.

But there's some telemetry that's not gathered natively, like device or endpoint data, for example Falcon data from CrowdStrike, or Wiz findings, or Lacework, and there are threat intelligence feeds out there.

So what I would do is aggregate all my findings into Security Hub, which is a two-click enablement to send those to Security Lake. Then I want to aggregate all of my logs and telemetry data in the actual Security Lake. That can be not only AWS native logs but also my partner logs, multi-cloud logs from other major cloud providers, my on-premises data, and my syslogs. And most of the highest-severity telemetry our customers told us they want to see is in OCSF already.

Now, you might be wondering: what about when it's not? That's when I might work with a provider like Cribl or Confluent that has Security Lake packs that do the transformation from syslog to OCSF. They also have other capabilities, like federated search or event-based filtering between endpoints.

So then I can further streamline my enterprise. Then I've got to actually use this data, right? What's the point in having everything if I can't correlate it or visualize it, et cetera?

So then on the right, we'll analyze this telemetry. Of course, there are dashboards with Trellix or IBM QRadar or Splunk, or you can use native services like OpenSearch or QuickSight to do the alerting, searching, and dashboarding you would expect from an analytics service.

However, maybe I just need to do a simple SQL search to pull a week's worth of VPC flow logs from three months ago that aren't in my SIEM anymore. Well, I can use Athena, and that's a straightforward way to search logs with no data movement. It's not a license you add; it's just available at all times, and you simply query the data you need.

Security Lake logically partitions and organizes that data in a way that's compressed and easy to rehydrate, at just 3% of the original raw log size, and it's streamlined for security analytics.

On the right, I'm just giving some insight into the things Security Lake actually does. That list of S3 directories: Security Lake automatically creates and prefixes them so it can find where to put logs later. There's a schema data type in the Glue catalog that Security Lake automatically creates for you, and an Athena query is as simple as a select all from some data set.
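(As a concrete illustration of that "select all from some data set" idea, here's a minimal sketch of running such a query through the Athena API. The database and table names follow Security Lake's usual naming convention, but the Region suffix, workgroup, and results bucket are assumptions.)

```python
# A minimal sketch of the "select all from some data set" query via the Athena API.
# The database and table names follow Security Lake's naming convention; the
# Region suffix, workgroup, and results bucket below are assumptions.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

query = """
SELECT *
FROM amazon_security_lake_glue_db_us_east_1.amazon_security_lake_table_us_east_1_vpc_flow_2_0
LIMIT 10
"""

execution = athena.start_query_execution(
    QueryString=query,
    WorkGroup="primary",  # assumed workgroup
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # hypothetical bucket
)
print(execution["QueryExecutionId"])
```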

So hopefully that answers some questions about what Security Lake is and does, and how it differs from existing security logging.

And next, Ross is going to give you a demo. I'll have you introduce yourself.

All right. So before I flip over to my screen: this demo, which is going to be live (everybody cross your fingers), came out of a customer conversation. They love Security Lake, but the guy who loves Security Lake needed to convince his SOC manager that Security Lake was going to be useful for them, because they're not sending their VPC flow logs to any of their SOC tools, for lots of reasons.

And so he challenged me. He said, Ross, can you convince my SOC manager? Show me some Athena query that would run against my 100,000 IP addresses that we consider IOCs, do a daily report, and show me that every single day, so I can convince that SOC manager that we leave the VPC flow over here but still use Security Lake.

And as Matt talked about, I can decide what I want to send to their SIEM, to Datadog, to whatever. So the challenge is: write an Athena query that matches all of the IPs they consider bad, their IOCs, get a quick result, then schedule it and somehow alert the SOC manager when there are hits.

And so let's flip over. Cool. So I asked the customer to give me a little sample of what their threat feed looked like. He gave me two lines; he didn't want to give me all 100,000 IP addresses. It was in a tab-separated format, so I went out on the internet and found that Critical Path Security grab open source threat feeds and combine them on their GitHub. They're easy to find if you search, and they're tab separated, so it was very easy for me to build a tab-separated table: I put the files in S3 and then built a table.

The Athena documentation here was really, really good; it took me five minutes to do that CREATE TABLE that's there. I've already done the DROP TABLE in the interest of time. But this is live, right? So we're going to build a threat feed table from Athena; that should just take one second.

And then let's see if we have any IP addresses, right? Do we have some IP addresses? Did it actually work? Hey, we just built a brand new table off of tab-separated files sitting in S3. OK, easy. So I've got the first step. But you remember what I said? He's got 100,000 IP addresses that he wants to run across his VPC flow logs every single day and check. So, making sure I got that requirement: there's half a million IP addresses
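(For reference, a rough sketch of the table build Ross describes: a CREATE EXTERNAL TABLE over tab-separated threat feed files in S3, submitted through Athena. The database name, bucket, and column layout are assumptions based on the talk, not the actual feed format.)

```python
# A sketch of the threat feed table build: a CREATE EXTERNAL TABLE over
# tab-separated files in S3, run through Athena. Names and columns are assumed.
import boto3

athena = boto3.client("athena")

create_table = r"""
CREATE EXTERNAL TABLE IF NOT EXISTS security_analytics.threat_feed (
  ip          string,
  description string,
  source      string
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'
LOCATION 's3://example-threat-feed-bucket/feeds/'
TBLPROPERTIES ('skip.header.line.count' = '1')
"""

athena.start_query_execution(
    QueryString=create_table,
    QueryExecutionContext={"Database": "security_analytics"},                # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # hypothetical bucket
)
```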

now in this database that I grabbed from that open source feed. So let's start building a query that will look for any hits in the VPC flow logs I've got in my Security Lake. We're going to flip over to another tab; I'm going to run it and then I'll talk about it.

All right. So as that's running: I'm not a SQL expert. I've got friends who are SQL experts. I had no idea what a left join was, so I had to do a little bit of learning on that. Some of you are probably laughing because you know what SQL left joins look like. But really, I think it's line 16, if you can see that on the screen: I'm matching my flow IP addresses against the threat feed IP addresses, and then I'm just pulling out some things so we can take a look at what the data looks like. Pretty easy. But what I've got now is an Athena query that tells me I've got hits in my VPC flow logs.

So those top rows there: I've actually combined them with the VPC ID, so that the SOC manager or the SOC team can say, all right, that external IP address hit this VPC in this account, and it actually was allowed. If you look in VPC flow, there's a field that records allowed or denied. Let's look at just the ones that were allowed; I think that's a little more important, maybe a little more risk, right?
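(A sketch of the kind of query Ross describes: join the Security Lake VPC flow table against the threat feed table and keep only allowed traffic. He used a left join; this sketch simplifies to an inner join. The table names, the eventday partition value, and the OCSF field paths are assumptions, so verify them against your own Glue catalog.)

```python
# A sketch of the matching query: join Security Lake VPC flow logs against the
# threat feed table and keep only allowed traffic. Field paths and partition
# values are assumptions to verify against your Glue catalog.
import boto3

athena = boto3.client("athena")

hit_query = """
SELECT
  flow.accountid,
  flow.region,
  flow.src_endpoint.ip AS src_ip,
  flow.dst_endpoint.ip AS dst_ip,
  flow.action,
  threat.ip            AS matched_ioc
FROM amazon_security_lake_glue_db_us_east_1.amazon_security_lake_table_us_east_1_vpc_flow_2_0 AS flow
JOIN security_analytics.threat_feed AS threat
  ON flow.src_endpoint.ip = threat.ip
  OR flow.dst_endpoint.ip = threat.ip
WHERE flow.eventday >= '20241125'   -- example partition value to limit the scan
  AND flow.action = 'Allowed'       -- assumed OCSF field/value for accepted traffic
LIMIT 100
"""

athena.start_query_execution(
    QueryString=hit_query,
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```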

So we've got half a million IP addresses, and we are now running queries against that database to give a pretty good indicator that something may have happened. The next step: we need to run this every single day. Two weeks ago I had no idea what Step Functions were. My co-presenter from a different session said, Ross, make it a Step Function; you can then control all of that and you can schedule it. And so I learned what Step Functions were for the first time.

Actually, I'm going to execute this and we'll come back to it. So as it's executing in the background: if you don't know Step Functions, it's really nice. You can drag and drop states, start putting some code in, and it helps build all the permissions for you on the back end. What the Step Function is really doing is running that query I just showed you; then the next step is to actually go get the query results from Athena, and I'll show you what those look like in just a minute. And then, for those results to show up decently in email or in Slack,

I had to write a little Lambda, with some help from my 17-year-old son to format some of that JSON, so that it's readable in email and in Slack. So it's a really simple state machine; there's nothing hugely crazy about it. But ah, I thought it was running; sorry, that will take a second to run.
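(For reference, a minimal sketch of a state machine like the one Ross describes: run the Athena query, fetch the results, format them with a Lambda function, and send a notification, with SNS standing in here for the email and Slack delivery. Every ARN, name, and S3 location is a hypothetical placeholder.)

```python
# A sketch of the daily IOC report state machine: run the Athena query, fetch the
# results, format them with a Lambda, then notify. All ARNs and names are placeholders.
import json
import boto3

definition = {
    "Comment": "Daily IOC match report over Security Lake VPC flow logs",
    "StartAt": "RunThreatMatchQuery",
    "States": {
        "RunThreatMatchQuery": {
            "Type": "Task",
            # the .sync integration waits for the query to finish before moving on
            "Resource": "arn:aws:states:::athena:startQueryExecution.sync",
            "Parameters": {
                "QueryString": "SELECT ...",  # the matching query shown earlier
                "WorkGroup": "primary",
                "ResultConfiguration": {"OutputLocation": "s3://example-athena-results/"},
            },
            "Next": "GetQueryResults",
        },
        "GetQueryResults": {
            "Type": "Task",
            "Resource": "arn:aws:states:::athena:getQueryResults",
            "Parameters": {"QueryExecutionId.$": "$.QueryExecution.QueryExecutionId"},
            "Next": "FormatResults",
        },
        "FormatResults": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "Parameters": {
                "FunctionName": "format-ioc-report",  # hypothetical Lambda name
                "Payload.$": "$",
            },
            "Next": "NotifySoc",
        },
        "NotifySoc": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sns:publish",
            "Parameters": {
                "TopicArn": "arn:aws:sns:us-east-1:111111111111:soc-alerts",
                "Message.$": "States.JsonToString($.Payload)",
            },
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="daily-ioc-report",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111111111111:role/example-sfn-role",  # hypothetical role
)
```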

While that's running, I'll go to the end of the story. I took that state machine, got it working just great for that single-run use case, and then I took it to EventBridge and made a schedule, and it is now running. If I showed you my email, you'd see I'm getting emails every single day of my fake IP address hits.
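(And a sketch of that EventBridge schedule with boto3: a daily rule targeting the state machine. The rule name and ARNs are hypothetical.)

```python
# A sketch of the EventBridge schedule: run the state machine once a day.
# The rule name, state machine ARN, and role ARN are hypothetical.
import boto3

events = boto3.client("events")

events.put_rule(
    Name="daily-ioc-report-schedule",
    ScheduleExpression="rate(1 day)",
    State="ENABLED",
)

events.put_targets(
    Rule="daily-ioc-report-schedule",
    Targets=[
        {
            "Id": "daily-ioc-report",
            "Arn": "arn:aws:states:us-east-1:111111111111:stateMachine:daily-ioc-report",
            # role that lets EventBridge start the state machine execution
            "RoleArn": "arn:aws:iam::111111111111:role/example-events-to-sfn-role",
        }
    ],
)
```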

I'm also getting Slack notifications. Once again, this is not a pretty Slack notification; I just started down the path of: here's a hit, what account is it in, what's the match. And I'm starting to think about what the next steps are that the SOC should do. Should they open a ticket? Should they isolate that IP address? Out of this, my co-presenter from the other session and I are actually going to write a blog to highlight this whole solution, with some more actions in there.

Let's see if that ran. It ran. So Step Functions are cool because it's all just code; yeah, it looks like drag and drop, but it's all just code underneath doing things. So there's my big ugly query there, and then I get the results from Athena. The results are here, down in rows, and this is why I had to actually reformat everything.

The first row is actually all the headers from Athena, and below that are all the values, so it really didn't look pretty in email. I ended up reformatting everything. This is now what the SOC is going to get in email: they're going to get the findings as JSON, row by row, with all the indicators, and all of those are the things I pulled out in the query.
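(A minimal sketch of that formatting Lambda: the first row of the Athena GetQueryResults output holds the column headers, so zip it with the remaining rows to get readable records. It assumes the state machine passes the GetQueryResults output straight through as the event.)

```python
# A minimal sketch of the formatting Lambda: the first row from Athena
# GetQueryResults is the header row, the rest are data rows; zip them together
# to build readable records for email or Slack.
def lambda_handler(event, context):
    rows = event["ResultSet"]["Rows"]
    if len(rows) < 2:
        return {"findings": [], "message": "No IOC hits today."}

    # first row is the header row; the rest are data rows
    headers = [col.get("VarCharValue", "") for col in rows[0]["Data"]]
    findings = [
        dict(zip(headers, [col.get("VarCharValue", "") for col in row["Data"]]))
        for row in rows[1:]
    ]

    # one line per hit, readable in an email body or a Slack message
    lines = [
        ", ".join(f"{key}={value}" for key, value in finding.items())
        for finding in findings
    ]
    return {"findings": findings, "message": "\n".join(lines)}
```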

I showed you Slack already; that was an earlier version, but that's what I'm getting in email right now. I need to do a little better formatting. My other to-do for this: that open source threat feed gets updated daily or weekly, so I want to automate pulling it, so the SOC keeps getting new IP addresses, and every time this runs daily I pick up hits on any new threat intel.

So that's just one way to utilize the data in Security Lake, the VPC flow in the databases, and to bring in some threat intel. But I'm really excited for Andrew to come up, and he's going to tell you the story of how he's using Security Lake even more than just Ross's fun Athena queries.

Yeah, cool. Awesome. Thanks, Ross. And thanks also for having me here at re:Invent to talk to you today about what SEEK has been doing with Amazon Security Lake. Today I want to take you through a bit of our journey: the challenges we're looking to solve around security logging and event data management, and how Security Lake fits in with that.

Then I'll take you through, not a demo, but an example of how we use Security Lake in real incident response scenarios. And at the end, we'll have a look at our future roadmap and where we think we're going with this. Hopefully I have something interesting to share with you today, and if you're thinking of using Security Lake, maybe some inspiration as well. All right.

So who is SEEK? We are the leading online employment marketplace for Australia, New Zealand, and Southeast Asia. Or, to put it another way, we're a website and an app you use to search for jobs, apply for jobs, and hire great people for your company.

The company was founded back in 1997, the good old days of the internet, if some of us here are old enough to remember that, and we are headquartered in Melbourne, Australia. On any typical day we will serve up around 50 million search requests, and that's kind of amazing when you think about it, because the population of Australia is only 26 million, which is smaller than the states of California or Texas.

If you look at us more broadly across our Southeast Asia region and across all of our brands, we have relationships with around 60 million candidates and nearly half a million hirers, and we operate in a marketplace of nearly a billion people. Today SEEK runs virtually all of our production workloads in AWS.

And obviously that's a large operational footprint, and with that footprint comes some challenges around security logging and event data management. This challenge doesn't just apply to cloud; it applies to our entire technology environment. The first of the challenges is just the sheer volume of logs: the amount of log data that security teams today are expected to manage is relentlessly increasing.

Compounding this problem is the, I don't know, insane number of security tools that we have in the industry as well. We're deploying those into our organizations, and that's all generating data. That feeds into us having to manage multiple solutions and different ways of managing all of our data.

For example, we have one way of dealing with our VPC flow logs, another for CloudTrail, and yet another for how we work with our EDR logs. That in turn creates operational overheads; there's a nontrivial cost to maintaining all of these different moving parts at all times as well.

Also, if you're in the operations space or an analyst, having to work with all these different applications all the time, and having to know where the data is, is also an overhead. Lastly, there's the lack of standardization.

In the past, we tried to standardize towards the Common Information Model, the CIM, but we never really got the traction we needed to realize the long-term benefits of that. And I think in that instance there's a real loss of opportunity when you're not using standards.

Automation scenarios are much easier when you're applying standards, and well-defined schemas just make coding and documentation so much better and more shareable within the organization. So with all of those challenges in mind, a little over 12 months ago we had reached a point where we had started to review our strategy around security logging and event data management.

We realized that what we needed to do was actually apply engineering effort to the problem, and we had started working with our internal data science team to define and design a data lake for our security data. And then, this time last year at re:Invent, Security Lake was announced. Thank you. And that really helped us focus our efforts on what we were designing towards.

In terms of our security and business needs, the first one there: we really wanted separation of concerns as a foundational architectural principle. So something modular and flexible, where we had clear abstractions over data ingestion, storage, and query.

We also wanted to be able to leverage open formats like Parquet, and now obviously also OCSF. Another key aspect of the design was that we really wanted something that allowed us to deliver value to the business incrementally over time, so we weren't reliant on a big bang release at the end. That was a key principle.

Performance is another one. We have situations where we struggle with query performance, and if you're an analyst or working in operations, that can be really frustrating. We also focused on operational efficiency: dropping the cost of ownership of all these multiple solutions, and generally dropping the overall cost of ownership for the entire set of data.

And finally, we wanted to build a foundation on which things were extensible and reusable, and which could open us up to more capabilities from the same data. That's what this slide is talking about here. Obviously the basic capability is incident response, but we wanted to be able to expand out towards threat hunting and threat detection, look at high-risk user behavior, and also facilitate activities like attack path mapping.

So what you'll notice there is we've gone from incident response, which is a very reactive security use case, to expanding out into more proactive use cases like threat hunting and attack path mapping. Reporting and metrics was also a core business need that we needed to support.

And the last one there, cloud operations and engineering support, was something we came to realize over time as we were doing the design: the data stored in the data lake didn't necessarily need to support only security use cases. We thought we could add value to other parts of the business with this data as well.

So this is where we are today, more or less, as a high-level diagram (not detailed, obviously) of our use of Security Lake. We're ingesting all the supported native AWS sources: VPC flow logs, CloudTrail, Security Hub, and DNS logs.

This part, for me, was the most exciting when I initially saw Security Lake, because what we had the opportunity of doing here was decommissioning some of our old legacy automations for how we ingested flow logs, CloudTrail, and Security Hub. And for the DNS logs, this is the first time we've had visibility of these. We'd had an item on our backlog for a long time to build an automation for the DNS logs, which we'd never gotten to due to priorities. But now we don't have to; that's all managed by Security Lake for us, which is really nice.

On the custom sources side, we've got some partner integrations, like with CrowdStrike and Netskope, and we've also worked with AWS Professional Services as well as Carbon Black to build out our own custom source for the Carbon Black logs to be ingested into Security Lake.

So that's the data source side being pushed into Security Lake. On the subscriber side, we've started out with Athena, which is obviously available out of the box. We have a highly capable team that is comfortable writing SQL queries in Athena, so on the next slide I'll go into deeper detail on some real-world examples there.

But talking about that cloud operations and engineering support: we had a team come to us a few months ago who were looking for some help to figure out why there was high-volume traffic egress through a gateway in a particular VPC. We were able to write some Athena queries against the DNS logs and the VPC flow logs to help identify the source of that problem for them.
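(As an illustration of the kind of query that helps here, a sketch that aggregates bytes per source/destination pair from the Security Lake VPC flow table to surface the top talkers in the affected account. The table, partition, and field names are assumptions to check against your Glue catalog; a similar query against the Route 53 table would surface the domains being resolved.)

```python
# An illustrative sketch: find the top talkers in one account from the Security
# Lake VPC flow table. Table, partition, and field names are assumptions.
import boto3

athena = boto3.client("athena")

top_talkers = """
SELECT
  src_endpoint.ip    AS source_ip,
  dst_endpoint.ip    AS destination_ip,
  sum(traffic.bytes) AS total_bytes
FROM amazon_security_lake_glue_db_us_east_1.amazon_security_lake_table_us_east_1_vpc_flow_2_0
WHERE accountid = '111111111111'   -- the account that owns the VPC in question
  AND eventday  >= '20241101'      -- example partition value to limit the scan
GROUP BY src_endpoint.ip, dst_endpoint.ip
ORDER BY total_bytes DESC
LIMIT 25
"""

athena.start_query_execution(
    QueryString=top_talkers,
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```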

We're currently running a proof of concept with Splunk, trialing the integration with Security Lake and testing out those queries. And the last one here is everyone's favorite security activity: vulnerability management. We all love it, don't we?

This one was a real surprise. We had reached a point with this application where we wanted to ingest Amazon Inspector findings to surface onto our dashboards, and we were contemplating an integration with Security Hub. But we were also looking at the data in Security Lake and realized that this was actually a really good use case: make the vulnerability management platform a subscriber of Security Lake.

So it's now in production, making queries every day on the Inspector findings that are being pushed through from Security Hub. It's interesting that we're finding use cases we never expected to occur.

So as I mentioned, these are some real-world examples that go into a little more depth about how we've used Security Lake. These were both benign findings, so, just saying, it's not social-media-worthy information. They're interesting from a security incident investigation point of view, but they also highlight the benefits and the value we got out of Security Lake pretty early on.

The first one here is a finding from Amazon GuardDuty, the threat detection service: an unauthorized access event to an S3 bucket from a malicious IP caller on one of our custom threat lists.

Now, anyone who has worked in security for some time, or in operations or as an analyst, you get to know your environment, and there are certain events you see that make you sit up and take notice. This was one of those; I'd only ever seen this particular event when we'd done our own internal testing.

So we jumped on this really quickly. What we knew from the GuardDuty finding were a couple of things. First, we knew it was a production S3 bucket. It had customer data in it, not particularly sensitive, but it was still customer data. It wasn't a public bucket; we knew that because we've got internal controls to prevent that from happening.

We could also see that it was a GetObject API operation against an S3 object, made with valid credentials that were issued by STS. So we kicked off our playbook: we searched in our SIEM for the IP address against CloudTrail and got no hits.

We did the same using our legacy VPC flow log system to look for the IP address and got no hits. So it was getting interesting at this point. We'd had Security Lake running for all of four weeks by this point in time.

So we decided, why not, we'll pivot into Security Lake, and we started writing a query in Athena to first of all verify the finding, or rather the lack of a finding, we'd had with the VPC flow logs. Again, in Security Lake we got no hit on that IP address in the flow logs either.

Then we did the same for the CloudTrail logs: wrote an Athena query, queried the CloudTrail logs in Security Lake, and got a hit. The difference this time was that we'd actually had a hit on a CloudTrail data event rather than a management event.
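(A sketch of what that pivot can look like: an Athena query over the S3 data events stored in Security Lake, filtered to the suspect IP. It assumes the S3 data events source is enabled and lands in its own table; the table name and OCSF field paths are assumptions to confirm against your own schema.)

```python
# A sketch of searching Security Lake S3 data events for activity from one IP.
# Table name, partition value, and field paths are assumptions.
import boto3

athena = boto3.client("athena")

ioc_query = """
SELECT
  time_dt,
  accountid,
  region,
  api.operation   AS api_operation,
  src_endpoint.ip AS source_ip
FROM amazon_security_lake_glue_db_us_east_1.amazon_security_lake_table_us_east_1_s3_data_2_0
WHERE src_endpoint.ip = '203.0.113.10'   -- placeholder IOC address
  AND eventday >= '20241101'             -- example partition value
ORDER BY time_dt DESC
LIMIT 100
"""

athena.start_query_execution(
    QueryString=ioc_query,
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```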

And once we dug into that, it showed us there was an application serving up presigned URLs to a client for the S3 objects, so the customer could upload and download data. So it was actually behaving as designed, which was good, and we were able to verify that later by looking into the source code once we'd made all those connections. It turns out that the IP address on our threat list had been reallocated some time before.

And so we had to actually remove that from our custom threat list.

The second one here was an indicator-of-compromise investigation. We had some intel provided to us that a well-known advanced persistent threat actor, or APT, was targeting companies like ours, and we'd been provided an IP address to go and search on.

So we went and did that in some of our systems and got no hits. But this time, rather than use our legacy VPC flow log system to search, we went straight to Security Lake. The reason for that was that this is a needle-in-a-haystack problem, and we needed to search across all of our accounts, all 350-plus of them, across multiple regions and a variable time window.

So it's quite a big search across all that data, and Security Lake was able to provide a hit for us quite quickly. We were able to identify a host running in one of our accounts that had been communicating with this malicious IP, which meant we kicked off a forensic investigation. Again, it turned out to be benign, which is fortunate, because it means I can talk about it.

And then we were able to close that investigation out. So the main points I want to make about where Security Lake was really helpful: in the first scenario, we had visibility over new log data that we'd never had before, and that allowed us to get a determination on that incident fairly quickly.

In the second one, the performance increase we saw in querying our flow logs was something we could never achieve previously in our legacy system. We probably either wouldn't have gotten a result, or we would have had to chunk it up, and it would have been quite complex to actually get that result in the end.

All right. So where are we now, and where do we think we're going to take this? Well, we're definitely going to be adding more data sources to Security Lake. We'll look at Okta and Proofpoint and GitHub and other internal tools, as well as additional DNS and firewall logs.

But the next thing we're really going to be focusing on, and a core theme, is expanding out that proactive security use case to support things like threat hunting. And in order to do that, we're going to layer in enrichment and context data.

So exactly like what Ross showed in the demo, we'll add threat intelligence feeds from various sources and other enrichment data, like business context from Workday and other internal systems, and threat-context data from services like VirusTotal, DomainTools, and AbuseIPDB.

And when you put all that together, the value on the subscriber side really starts to open up the options you have there. We are a user of Databricks, and we have a fair bit of Databricks capability in house.

So we're going to investigate and experiment with Databricks as a subscriber to Security Lake and see if we can't get some threat hunting capabilities from that. Likewise with Amazon SageMaker: we're about to kick off a proof of concept, hopefully in the next few weeks, also to trial some threat hunting scenarios. And we're going to continue to invest in integrating Splunk with Security Lake, as well as our vulnerability management platform.

So, summing that up: at the moment we're just at our first iteration of our use of Security Lake, and it's already delivering value to us every day. And I believe we've really got a solid foundation to build on and move forward with.

And as I've said, this is a long-term strategy; it's something we're going to use to incrementally deliver value to the business over time. An interesting lesson has been the unexpected use cases that have popped up, like the vulnerability management platform.

And I think that lesson tells us we haven't fully unlocked the potential in all this data. When we start to put it together like this, it's going to get really interesting, so I'm pretty excited to see where this ends up and where it takes us. And, yeah, that's all I've got today.

Thanks heaps for listening. I'm going to hand it back to Matt now to wrap up the session.

Cool, thanks, man. Thanks, Andrew. Yeah, so, security investigations. I've seen a couple of different personas have success here. One, of course, is the security teams; the other is the infrastructure teams.

So not only are there security investigation use cases like Andrew talked about, and reporting and email notifications like Ross showed you, but there are also things you can do like easily finding all your CloudTrail errors, or setting up a generative AI large language model to ask natural language questions of your structured data in Security Lake.

And that's available on AWS Samples today. You can ask questions like "what are the resources that have the most critical findings?", and with no code and no special skill, Amazon Bedrock and an integration with Security Lake will give you those answers, as long as it has the underlying telemetry, of course.

So I just wanted to share some resources with you; I saw some of you take pictures already. There are a few assets. Honestly, my favorite one is the how-to on visualizing Amazon Security Lake findings with QuickSight. It's a three-part blog, and it works you through getting started: within a minute of enabling Security Lake, you can query those logs in Athena.

And then setting up QuickSight takes just a number of minutes soon after. There are also two more Security Lake sessions that are a bit more intimate, and we feed you; those are tomorrow, Thursday. There's a breakfast and a lunch. Ross will be there, and I'll probably stop in and say hi as well.

So I just wanted to say thank you for listening to our session. Please remember to rate it. These QR codes are a direct link to our LinkedIn page. And yeah, hopefully you learned something about Security Lake today.

So thank you all.
