Improve SaaS application security observability with AWS AppFabric

Hi, everyone. Welcome. Hope everyone had a nice lunch. My name is Tamar Hayman. I'm a Principal Product Manager on AWS AppFabric, and with me...

Hi, everyone. My name is Anoush. I'm a Partner Solutions Engineering Manager at Splunk. I lead a team that builds integrations with AWS and other strategic partners, and I'm thrilled to be here today.

Awesome. I'm excited to be here with you today, Anoush, and to share with everyone what we've been working on together. You should listen to this session if you are experiencing SaaS sprawl in your organization, if you care about achieving improved observability over the SaaS applications deployed in your organization at reduced effort and lower cost, and if you'd like to strengthen the overall security posture of your entire stack, from your infrastructure layer to your SaaS layer.

But just a quick question before we start, so I know who's in the audience: how many of you are in an InfoSec or cloud security type function? Awesome. IT or DevOps? Awesome. Great.

Well, regardless of the function you're in, this session will be very applicable to your day-to-day, because no matter the function, the challenge of getting insight and observability over SaaS apps applies to all. Wouldn't you agree? Awesome.

Here's what we'll cover in this session today. We will unpack some of the challenges involved in achieving observability over SaaS apps. We will show how AWS AppFabric addresses these challenges and simplifies the complexity faced by security and IT teams. We will demonstrate what you can do with the data that AWS AppFabric transforms and provides in tools such as Splunk. And we will summarize some key takeaways and have time for Q&A at the end.

But first, let's talk a bit about the questions security teams face and what they're trying to answer. Security teams are responsible, as you know, for identifying and fixing vulnerabilities in company security systems and programs, and for ensuring the overall security posture of an organization. They and their leaders ask themselves some core questions: Who has access to company data and sensitive data? What are users doing with this data? Have settings on applications changed, leaving the company exposed? To answer these questions, security teams rely on audit logs from applications in order to track user events and anomalous activity. Getting these audit logs into security tools that can make sense of all this data involves manual and complex work. Security teams have to manage a multitude of point-to-point integrations from each SaaS application into one or more of the security tools they're using. Gartner estimates that companies with over 1,000 employees use an average of 100 or more applications, so that adds up to quite a lot of point-to-point integrations that security teams need to manage and oversee.

Let's take a look at this challenge from the perspective of a particular persona. Meet Krista. Krista is a security engineer at a financial institution; let's call it Acme Corp. She uses Splunk as her main security tool, and she has decided that her team needs to start monitoring audit logs from the three applications listed on this screen, because they're getting increased usage among the employees and contractors in the company. Those applications are Zoom, Dropbox, and Zendesk. To get these audit logs into Splunk, Krista needs to complete a series of tasks. She needs to read the API documentation for each one of those applications. She needs to build integrations for each of those applications into her security tools with an add-on. She needs to normalize that data into a common schema that's compatible with Splunk, in this case. She probably wants to enrich the data to help support threat detection rules, and she needs to correlate the data so that she can apply threat detection rules across multiple resources, both infrastructure and SaaS. Customers we have interviewed have told us that for their teams to do all of these activities, it takes weeks or even months per application. And while third-party security tools such as Splunk and others may have app integrations supported on their app stores, the reality is that a lot of these integrations and add-ons are actually created and supported by the developer community, not by the third-party security vendor themselves, so the performance and reliability that customers can expect will vary quite a lot.
In addition, these security vendors, like Splunk, are also being asked by their customers to keep supporting more and more SaaS application add-ons and integrations out of the box, which takes away from their ability and bandwidth to keep investing in threat detection capabilities and a delightful user experience, which is exactly where we'd all love for them to continue spending their time.
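To make that normalization work concrete, here's a minimal Python sketch of the kind of per-app mapping a team like Krista's would otherwise hand-build for every application. The raw field names (`operator`, `event_type`, and so on) are invented for illustration and are not the apps' real audit log formats.

```python
# Hypothetical sketch of per-app normalization: each SaaS app reports
# a similar action with different field names, and a mapper flattens
# them into one common shape. Field names here are illustrative only.

def normalize(app: str, raw: dict) -> dict:
    """Map a raw audit event from one app into a common schema."""
    if app == "zoom":
        return {"app": "zoom", "actor": raw["operator"],
                "action": raw["action"], "time": raw["time"]}
    if app == "dropbox":
        return {"app": "dropbox", "actor": raw["actor"]["email"],
                "action": raw["event_type"], "time": raw["timestamp"]}
    raise ValueError(f"no mapper for {app}")

event = normalize("dropbox", {
    "actor": {"email": "jane@acme.example"},
    "event_type": "file_download",
    "timestamp": "2023-11-28T04:30:00Z",
})
```

Multiply a mapper like this by 100-plus apps, each with its own quirks and API changes, and the "weeks or months per application" estimate above starts to feel conservative.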

As a result of all of this, security teams are bogged down by a lot of point-to-point integrations and data pipeline management, and that takes away from their ability to focus on strategic, value-add tasks such as monitoring and threat detection. We've heard these pain points from customers time and time again, and I'd love to touch on a few themes and quotes with you today. We'll check back in with Krista later in this presentation on these challenges.

Customers we spoke with shared a few themes with us. The first is that detecting anomalous behavior on the SaaS applications deployed in the organization is critical, but it involves a lot of operational burden. They've said that their teams are spread thin and often have to trade off data pipeline management tasks against threat detection and event response tasks. They've said that normalization specifically is the most painful task related to consuming SaaS audit logs in their preferred security tool. And so what happens is that teams are forced to prioritize which apps they actually integrate into their security tool and monitor, which leaves them exposed, because certain apps are just not being monitored. To address these challenges, we recently introduced AWS AppFabric, a fully managed service that quickly connects the leading SaaS applications deployed in your organization with your preferred security tool.

This service aggregates, normalizes, and enriches audit logs from the leading SaaS applications being used out there. Ultimately, it will also improve the experience of the app users themselves by offering a generative AI powered assistant that can recommend tasks the app users, the knowledge workers, can take across different applications without leaving the context of the app they're in. But for the purpose of today's demonstration, we're focused on the security use case and security personas. However, stay tuned for the end, where I'll reference some additional sessions you can attend to learn more about the cross-application productivity aspects.

Some key things to call out about this diagram. The main thing is that customers have a lot of choice. By using AWS AppFabric, they can choose the format in which they'd like to get and consume their audit logs. They can choose to get their audit logs raw or normalized into the Open Cybersecurity Schema Framework, and we'll touch on that in the next couple of slides. They can get the data formatted in JSON or Parquet. They can also choose the destination to which AWS AppFabric will send those audit logs: the customer's Amazon S3 bucket or an Amazon Kinesis Data Firehose. And of course, from there they can also point to Amazon Security Lake, if that's where they'd like to aggregate and centralize their audit logs.

So what applications are supported by AppFabric? Well, we're currently supporting 14 applications and have preannounced another 10 that will be supported by the end of the year; they're listed here on this screen. The top part lists all the SaaS application data sources that AppFabric pulls and transforms audit logs from. We also support five security destinations, and a sixth that we've preannounced today as well, which will be supported by the end of the year. And stay tuned, because next year we plan to accelerate the coverage of our apps to triple digits. So we're dramatically accelerating the number of sources and destinations supported as part of the service.

Before we continue, let's pause and take a moment, because a lot of you are probably thinking: AppFabric sounds a lot like some other services I've heard of at AWS. So what's the difference? I'd like to touch on a few services and how a customer might choose to use each one for a different use case or purpose.

First, let's start with AWS AppFabric. We just discussed it and learned that it quickly connects your SaaS applications and preferred security tool, and transforms those audit logs into the OCSF schema, a security-centric framework. You can choose to ingest and transform the audit logs from all of the SaaS applications we listed on the previous page and consume them in a variety of destinations.

Amazon AppFlow, however, is what you would use if you'd like to securely transfer data between a SaaS app like Slack and an AWS service like Amazon S3 or Redshift. You should use it if you wanted to build a solution that connects Slack to another application using a custom data transform. It will not transform the data into the OCSF schema; it is not security-centric.

And Amazon EventBridge is a serverless integration service that uses events to connect SaaS application components together. It makes it easier for developers to build scalable event-driven applications. So you could use EventBridge, for example, if you wanted to get a notification via SMS every time someone opens an issue in GitHub.

Really quick, if I can add a point there. I just wanted to say that Splunk also integrates with several of these services, like EventBridge, CloudTrail, CloudWatch, Security Hub, and Security Lake. And there are also a few ways to get this data into Splunk, which we will talk about today. Awesome. Thank you.

Great. Before we explain a bit more about how AWS AppFabric addresses the challenges that Krista and similar teams face, let's talk a bit about why these challenges exist in the first place. We'll do this by walking through an example that looks at a real-life scenario.

In this scenario, we have two users, Jane Doe One and J Doe Two. They're taking actions on two different applications, in this case Dropbox and Slack. The actions they're taking are considered suspicious because they could indicate that company data is being exposed, and these actions are also being logged in the audit logs of these applications' APIs.

In this example, Jane Doe One is an administrator on both Dropbox and Slack. She removes all of the other administrators in Dropbox, leaving her as the sole administrator. She then proceeds to download an irregular amount of files from Dropbox within a 30-minute time frame. And around the same time, she's also in Slack giving permissions to J Doe Two that allow him to share data on Slack channels, making them publicly available to people outside the company. J Doe Two proceeds to do just that: he generates a link that can be publicly shared, through which company data can be shared with people who are external to the organization.

These events are happening in different apps. They are suspicious. And they are represented differently, as we can see with the examples on the screen; each application has a different way to represent similar actions. And we don't really have a way of knowing, just by looking at this, whether Jane Doe One and J Doe Two represent the same actor, the same human, or are entirely different identities.

So as we see in this example, there are a few reasons why making sense of these events across different SaaS applications and different event representations is complex and resource-intensive. First, the data frameworks are quite different across applications. Dropbox organizes its data around folders and files, Slack around messages and channels, and an app like Asana or Jira, for example, will organize it around issues and projects. That means security teams need a real understanding of the data frameworks and hierarchies of each application.

Second, it's usually not sufficient to just understand these and then find a common schema to normalize all of this into, which is itself very resource-intensive. In many cases you also need to enrich the data to make sense of it. So, for example, in this case a customer may choose to enrich it with a unique user identity to help know whether Jane Doe One and J Doe Two are in fact the same actor or two different actors.

And then lastly, different applications' APIs will vary in the ingestion and integration approach they offer. They will have different rate limits for their APIs. They will also have different freshness of the data, that is, how often they generate new audit logs and events. So it means that customers also need to develop custom integration approaches for each one of these applications.

So how does AppFabric address these challenges faced by security personas?

Well, first we address the normalization challenge. We've adopted the Open Cybersecurity Schema Framework, as I mentioned a few slides earlier. OCSF, for those who may not be familiar, is an open source effort led by AWS and leading partners in the cybersecurity field.

What AppFabric has done is define its OCSF schema by working back from the MITRE ATT&CK framework, an industry-accepted framework for documenting common attack vectors and techniques that is based on real-life observations. The reason we did that is because it helps security teams identify the most immediately relevant information out of the audit logs that SaaS apps generate.

And AppFabric also extended the OCSF schema to better represent SaaS applications, because monitoring audit logs from infrastructure services is quite different from doing that for SaaS audit logs.

Audit logs of infrastructure services look more at the performance and reliability of a service, for example, while SaaS audit logs are important because they help us look at anomalous user behavior.

And sorry, in addition to the normalization, we have also enriched the audit logs with a unique user identity. We rely on data such as corporate email, and that addresses the scenario where we don't know whether Jane Doe One and J Doe Two are the same actor or two different ones.

So combined, the enrichment and the normalization help security teams extract more value more efficiently out of SaaS application audit logs.
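As an illustration of the kind of identity enrichment just described, here's a small Python sketch (this is not AppFabric's actual implementation): audit events carrying different display names are resolved to one canonical identity via a corporate directory keyed by email. The directory contents and field names are invented for this example.

```python
# Illustrative identity enrichment: resolve display names in audit
# events to a single canonical user id via corporate email, so events
# from different apps can be correlated to the same human.
directory = {  # hypothetical corporate directory keyed by email
    "jdoe@acme.example": "user-001",
}

def enrich(event: dict) -> dict:
    """Attach a canonical user_uid so cross-app events correlate."""
    enriched = dict(event)
    enriched["user_uid"] = directory.get(event["email"], "unknown")
    return enriched

dropbox_evt = enrich({"actor": "Jane Doe One", "email": "jdoe@acme.example"})
slack_evt = enrich({"actor": "J Doe Two", "email": "jdoe@acme.example"})
# Matching user_uid values suggest the same human behind both names.
```

With this kind of enrichment, the Jane Doe One / J Doe Two ambiguity collapses into a single identifier that detection rules can key on.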

And here are some highlights of the OCSF schema that AWS AppFabric uses. As you can see, we're using the same categories and event classes that OCSF has, but we've extended the activity names to introduce and represent SaaS behavior more effectively.

We've highlighted a few activity names in bold blue font; they represent some of the actions we saw in that scenario with Jane Doe One and J Doe Two.

So when Jane Doe One removes other admins from the Dropbox app, that would fall under "Revoke Privileges". When she assigns J Doe Two elevated permissions to be able to share data publicly, that would fall under "Assign Privileges". When she downloads the irregular amount of data, that would fall under "Export", and so on.
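The mapping from raw app actions onto those OCSF activity names can be pictured as a simple lookup. This is a hedged sketch: the raw action strings below are made up for illustration, while the activity names come from the slide.

```python
# Sketch of mapping the scenario's raw app actions onto the OCSF
# activity names mentioned above. Raw action strings are hypothetical.
ACTIVITY_MAP = {
    ("dropbox", "remove_admin"): "Revoke Privileges",
    ("slack", "grant_public_sharing"): "Assign Privileges",
    ("dropbox", "file_download"): "Export",
}

def to_ocsf_activity(app: str, raw_action: str) -> str:
    """Return the normalized activity name, or 'Other' if unmapped."""
    return ACTIVITY_MAP.get((app, raw_action), "Other")

activity = to_ocsf_activity("dropbox", "remove_admin")
```

Once every app's actions land in the same small vocabulary of activity names, one detection rule on "Revoke Privileges" covers all sources at once.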

So right now, customers are managing a web of point-to-point integrations where they have to map each application into maybe more than one security tool.

With AWS AppFabric, customers no longer have to do that. They don't have to manage these integrations, they don't have to deal with data transformation and normalization, and they don't have to maintain data pipelines.

We do all of this for the customers, with no coding effort required whatsoever.

So how easy is it to set up AppFabric for your company? Well, that's what I'd like to show you next. I'll go and set up a demo; just give me one minute.

Yes. Oops, sorry. Alright. So first, to find AppFabric's product detail page, you go to the AWS console. I'm already on the product detail page now, but you can search for it in the search bar, and this is where you'll land.

This page has some useful information for you to check out if you haven't already: how it works, a diagram, documentation, and FAQs. A lot of good info, but let's get started so I can show you how easy it is to get this all set up.

I'll call out a few things on this page. There are essentially three steps to get started with AppFabric and start consuming audit logs through AppFabric from your SaaS applications.

The first step is to create what we call an "App Bundle". A bundle is a concept similar to a container: it contains all of your AppFabric resource information.

The second is to authorize the applications that you want AppFabric to start pulling audit logs from, by providing authorization credentials. We'll get to that in a minute.

And the third is that you provide us with your ingestion preferences, the configurations: the format, the destination, and so on.

So let's go through this. We'll start by creating an App Bundle. Here you have a choice of which type of key you would like to use to encrypt your AppFabric app credentials and your information. In this case, you can choose either an AWS owned key or a customer managed key.

Regardless of what you choose, they both meet the high security standard that AWS is known for. For the purpose of this demo, I'll use an AWS owned key. Easy.

That's it; an App Bundle was created. For the second step, you create an App Authorization. Here you can choose, from this drop-down menu, which application you would like AppFabric to pull audit logs from. As you can see, this list will continue to grow as we launch support for more apps.

For the purpose of the demo, I'll be using Zoom. And here you'll see the information you need to provide in order to authorize AppFabric to proceed.

A few things to note on this page. For a lot of the terms that appear here, you have additional info, such as what "Tenant ID" means for Zoom: it means the account ID for this particular Zoom organization.

The Client ID and Client Secret are generated when you create an OAuth app in the Zoom developer console. And in addition, for each one of these applications, we link documentation or a guide for how to find this information in each app's own documents and resources.

So for Zoom, what we need to do is go to the Zoom App Marketplace and build a Server-to-Server OAuth app. Essentially, all of the app credentials we need to complete the App Authorization step are on this page, and we can copy these credentials into the AppFabric console.

However, I do need to complete this whole workflow and publish this app before I can actually complete that. So let's copy this information here. Okay?

And again, I just continue with this very basic, simple workflow. Some information here is required to proceed, like company name, developer name, and an email. Everything else you can mostly breeze through.

For scopes, our documentation points to a set of scopes that we recommend adding. For the purpose of the demo, I'm just choosing one. And we can activate the app; as you can see, there's no intent to publish this, it's just for our own purposes. And we can proceed to create the App Authorization. That's it.

An App Authorization was created, very simply, in just a few minutes. No coding, no complexity whatsoever.

And then the last step is defining our preferences for how we'd like to consume these audit logs. Our App Authorization, the only one we've created so far, is for Zoom, so I'll pick that one.

In terms of destination, I have three options: Amazon S3 with an existing bucket; Amazon S3, creating a new bucket for these audit logs; or Amazon Kinesis Data Firehose.

I will use an existing bucket, which I very conveniently already created. And I can also now choose the schema and format preferences.

We can choose normalized into OCSF in JSON; normalized into OCSF in Parquet, which is especially helpful if you plan to send this to Amazon Security Lake; or raw in JSON.

Another important thing to note here is that you can actually create different ingestions for the same source of audit logs if you'd like to consume these logs both normalized and raw, because you may use them for different purposes and different use cases.

So we'll set up this ingestion, and that's it. Those were the three steps, and here in S3 I'll already have these audit logs sent to the destination I specified.
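The schema, format, and destination choices from that last step can be summarized as data. Here's a minimal Python sketch of those choices; the field names are assumptions for illustration, not the real AppFabric API shapes, but the allowed combinations mirror the console options shown above (OCSF in JSON, OCSF in Parquet, or raw in JSON).

```python
# Sketch of an ingestion-destination configuration as plain data.
# Field names are hypothetical; the valid schema/format combinations
# mirror the console choices described in the demo.
def ingestion_destination(schema: str, fmt: str, bucket: str) -> dict:
    assert schema in {"ocsf", "raw"}
    assert fmt in {"json", "parquet"}
    if schema == "raw" and fmt == "parquet":
        raise ValueError("raw logs are delivered in JSON only")
    return {
        "processing": {"schema": schema, "format": fmt},
        "destination": {"s3Bucket": {"bucketName": bucket}},
    }

cfg = ingestion_destination("ocsf", "parquet", "acme-audit-logs")
```

Because you can create multiple ingestions per source, you could build one config with `("ocsf", "parquet")` for Security Lake and a second with `("raw", "json")` for archival.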

So, as you can see, those were very simple steps. They took what, less than 10 minutes, maybe five minutes, right, Anoush? No coding, nothing. You do this one time and forget about it; you don't have to come back here and do this again. You do need to do this once for every application, but that's it.

I'll switch back. Alright. So that was very easy. Now that we've set up AppFabric and we're starting to get audit logs into our S3 bucket, how can customers extract value out of the audit logs that AppFabric has delivered?

We wanted to make sure that we were meeting the pain points of customers and making this immediately valuable for them, so we worked with Splunk to ensure that the experience we offered removed the friction end to end.

What we did was jointly survey AWS and Splunk customers earlier this year to deep-dive the pain points they had related to consuming SaaS audit logs in Splunk.

We surveyed customers in financial services, media and entertainment, and professional services; customers with 10,000 users on their SaaS apps and customers with just 1,000 users. We talked to CISOs, cloud security architects, and so on.

And they all told us very clearly that they love their security tool for the powerful threat detection capability it offers, but they'd really love to reduce the effort they're spending on all this data pipeline management.

And can you share a bit more with the audience about what you've heard?

Yeah, sure. So based on the survey results from Splunk, as you can see, these are the key findings, and customers also talked about some of their challenges and priorities.

And as you see, 62% of them were ready to take on a solution that can help ingest SaaS application logs in OCSF format into Splunk.

And about 96% of them overwhelmingly said that they would really like these logs normalized.

And about 83% of our survey respondents emphasized that the solution needs to address their top three security use cases: first, identifying improper access and permissions on apps; second, tracking app configuration changes; and third, detecting suspicious login activity.

Right. Yeah, I love hearing these data points every time we touch on them; they're really powerful. Let's check back in with Krista.

As you recall from earlier in our presentation, Krista wanted to get logs from three applications: Zoom, Dropbox, and Zendesk.

She's now done this in the AWS AppFabric console, which I just demoed. It took her less than 30 minutes to get all of this set up, pretty quick, and she's starting to get audit logs streaming through Amazon Kinesis Data Firehose.

As you recall, she uses Splunk as her main security tool, and she has already set up this Kinesis Data Firehose for her other sources, so it was not a lot of effort for her to get that data into Splunk from AppFabric.

She has also already set up some rules for notifications in Splunk, so that whenever there are events such as escalating admin privileges on SaaS applications or enabling public sharing on applications, she gets notified via direct message in Slack.

And that's what's happening right now: she's getting a direct message in Slack, triggered by one of the events she set up rules for.

And Anoush will demonstrate what Krista can do in Splunk when she gets such a notification with AppFabric data: how does she investigate such events? Can you? Give me just a moment.

Yep. So, alright. Okay.

So, thanks for that context, Tamar.

So Krista, our SOC analyst here at Acme Corp, is getting those AppFabric logs into Splunk now, and she's also received the threat notification that Tamar just talked about.

Now, let's see how Krista uses Splunk to investigate, detect, and analyze what this suspicious activity is. So let's go ahead.

So Krista logs into Acme Corp's Splunk, and this is her customized dashboard with all the panels that she has on Splunk here. The very first panel on the left shows all the applications that are sending data into Splunk through AppFabric; it's listing Zoom, Zendesk, Dropbox, Slack, and Google.

Krista is also streaming all of her AppFabric logs via Kinesis Data Firehose, so it's all real time. And as she moves on, a few things raise her eyebrows. There are a couple of alerts here; one of them, the Assign Privileges alert, is what she got notified for, and her top KPIs are also trending, it looks like. These are the KPIs that Tamar talked about earlier, which are basically OCSF categories.

Before I move on, I would like to quickly show a recent addition for Splunk: the OCSF-to-CIM transformation add-on, as we call it. This is what Krista uses to map all the OCSF data to the Splunk CIM, or Common Information Model.

So what is the Common Information Model? The Splunk Common Information Model, more properly known as CIM, helps users normalize their data so they can understand it and get context out of it within Splunk. It ensures that all information from different vendors, systems, and sources can be integrated and understood consistently.

Now, thanks to the recent update of the add-on, it has become easier for Krista to read and write AppFabric searches; it also helps with correlating that data across multiple applications, and it has made her searches extremely performant.

Now, back to Krista's KPIs. The Identity and Access Management category has information about systems, authentication, and access control models; the Group Management event class within that category reports management updates to a group, membership access, and things like that.

And the Application Activity OCSF category gives detailed information about the behavior of certain applications and services; the Web Resource Activity event class in particular points to CRUD operations, the set of actions you take on web resources.

Now, as Krista goes further, she gathers from the trend lines in the panels that an absurd amount of user activity has occurred in the last few minutes. These KPIs have crossed certain thresholds that Krista had set for herself, which are based on the normal patterns and traffic she sees every day. The color coding also tells her that some metrics are way beyond her thresholds.

Now, Krista scrolls further and looks at traffic by application and vendor, and she notices a spike: her Zoom application is generating a lot of information. So she now wants to look at it in a timeline view, activities in an event timeline by application. In her case she chooses Zoom, and she notices that a similar amount of web traffic as pointed out by her KPIs, like creating and adding users and deleting users, has all been generated within the application.

She takes a closer look. All these events have been generated around 4:30 a.m. this morning, so she makes a quick note of that and then jumps into the parallel chart, which shows activity by actors. From the cumulative percentage overview line, she notices that about 73% or so of the traffic is being generated by just two users, and specifically, the top user is generating 54% of the traffic.
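The cumulative-percentage view Krista is reading amounts to sorting actors by their share of total event traffic. Here's a small Python sketch of that computation; the event counts are invented so that they mirror the 54% and 73% figures from the dashboard.

```python
# Sketch of the per-actor traffic-share calculation behind the
# dashboard's cumulative percentage line. Counts are invented to
# mirror the talk's 54% / 73% observation.
from collections import Counter

def traffic_share(actors: list) -> list:
    """Return (actor, percent-of-total) pairs, largest share first."""
    counts = Counter(actors)
    total = sum(counts.values())
    return [(actor, 100 * n / total) for actor, n in counts.most_common()]

actors = ["johnny"] * 54 + ["henry"] * 19 + ["alice"] * 10 + ["bob"] * 9 + ["carol"] * 8
shares = traffic_share(actors)
top_two = shares[0][1] + shares[1][1]  # share of the top two actors
```

A skew this extreme, where two actors account for nearly three quarters of all events, is exactly the kind of signal a threshold or anomaly rule would flag.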

And to her surprise, Johnny here, who's generating that amount of traffic, has an email domain that does not actually match the company domain; it's a competitor's. So what she does is drill right into that chart, and she sees a barrage of activities that Johnny has done. All these events have occurred in the span of the last few hours, across multiple applications and even many vendor products.

So she quickly gets onto a Splunk search window and writes a search to see the timeline view of all those activities, and it looks like they also occurred around 4:30 a.m. She starts digging into these events; for instance, she looks at the Update Configuration event to see what it's all about.

It looks like Johnny has updated configurations of many users, or resources specifically. In this case, it looks like he made some changes in Dropbox, where he's changing member policies and also access policies on a set of resources, which could be sensitive material. So that's not good.

And some of the other events she notices are about adding users, where she sees that Johnny has added a group of admins. Johnny has also removed a group of admins and updated the roles of certain admins within specific applications, or rather different applications.

Now, this is definitely very concerning to her. So what she does next is jump back to the overview dashboard and click on the Assign Privileges alert to actually see the event. And to her surprise, it's Johnny again, and he has somehow switched to admin privileges within Slack.

She makes a quick note of his IP address, his email, anything else she can create a report out of later. But what she couldn't figure out is how Johnny actually got access.

So what she does next is race to a Splunk search dashboard. Now, this is where she uses the OCSF-to-CIM conversion. Splunk, with all its data models, is able to pick up events across different applications using this Common Information Model, and she has written the search so precisely that she can pick up the very specific activity where Johnny is being malicious and switching admin privileges.

And to her surprise, it looks like in one of the events, Johnny actually received admin privileges through someone at the company, Henry. So she clicks on that event and dives a little deeper, and let's see: the actor here is Henry, and the user is Johnny. So it looks like he somehow got access through Henry and up-leveled himself, and this happened around 4 a.m.

So that's just before all the events started happening, right? So Krista strongly feels that Johnny made all these configuration changes right after this very specific event. Now, this could be an insider threat, or Johnny stole Henry's identity and somehow moved across different applications making these changes.

But whatever the case may be, Krista will figure that out after the very first stage of response she's going to take now. And thanks to Splunk, Krista can take a moment to catch her breath, collect her thoughts, and take any necessary mitigation actions, like blocking the user or shutting down the system, all from within Splunk. Awesome.

So cool to see all that data in there. Thanks. All right. Yeah, so you just observed how Krista used Splunk to detect suspicious activity and identify it. Now, let me summarize the main features that helped Krista identify what's going on within the environment.

The very first one is data ingestion. Splunk has a very long-standing relationship with AWS. We have several integrations for many services, and for AppFabric we lean on our existing integrations. The key component here is the Splunk Add-on for AWS. This application sits on top of Splunk and helps retrieve all AppFabric data from S3 in a very efficient way using SQS.
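The SQS-based S3 pattern the add-on relies on can be sketched roughly as follows. This is a simplified illustration, not the add-on's actual code: S3 publishes an event notification to SQS for each new AppFabric object, and a consumer extracts the bucket/key pairs and fetches those objects. The bucket name, object key, and the boto3 loop shown in comments are placeholders:

```python
import json

def s3_objects_from_notification(body: str):
    """Extract (bucket, key) pairs from an S3 event-notification message body."""
    records = json.loads(body).get("Records", [])
    return [(r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
            for r in records if "s3" in r]

# With boto3 (omitted here to keep the sketch self-contained), the consumer
# loop would look roughly like:
#   resp = sqs.receive_message(QueueUrl=QUEUE_URL, WaitTimeSeconds=20)
#   for msg in resp.get("Messages", []):
#       for bucket, key in s3_objects_from_notification(msg["Body"]):
#           obj = s3.get_object(Bucket=bucket, Key=key)  # read AppFabric output
#           ...index each JSON event from obj["Body"]...
#       sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

# Invented sample notification for illustration:
sample = json.dumps({"Records": [{"s3": {
    "bucket": {"name": "appfabric-output"},
    "object": {"key": "AWSAppFabric/slack/2023/11/events.json"}}}]})
print(s3_objects_from_notification(sample))
```

Deleting the SQS message only after the object is processed is what makes this pattern efficient and loss-tolerant: unprocessed notifications simply become visible again and are retried.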

Now, you can also get this data streamed into Splunk using Kinesis Data Firehose, so our analyst users can gain very valuable insights as events occur, in real time. Splunk also recently announced an integration with Amazon Security Lake, so any OCSF data formatted in Parquet or JSON is available to and readable by Splunk.

And apart from all this, Splunk also integrates with Security Lake, Security Hub, VPC, CloudTrail, and CloudWatch, so users can have a very unified view of their infrastructure alongside the SaaS application logs and information.

Secondly, data normalization. Tamar touched briefly on this. Splunk is also a contributor to OCSF; we're one of the co-founders. And one of the earliest and most popular Splunk capabilities is reading data schema on the fly; anyone who's used Splunk has noticed this.

But with OCSF, very optimized data storage, and a proper data structure, users will not have to worry about normalizing the ingested data, and they don't have to spend their valuable time on tedious tasks like building data-preprocessing pipelines to normalize that data.
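The payoff of that shared schema is that a single accessor works for events from every connected app, with no per-app parsers. A small illustrative sketch, with invented sample events shaped like OCSF output (`metadata.product.name`, `actor.user.email_addr`, and `activity_name` are OCSF field paths; the values are made up):

```python
# Two audit events from different SaaS apps, already normalized into OCSF
# (fields abbreviated; sample values invented for illustration):
slack_event = {"metadata": {"product": {"name": "Slack"}},
               "class_name": "Account Change", "activity_name": "Enable",
               "actor": {"user": {"email_addr": "henry@example.com"}},
               "time": 1700000000000}
dropbox_event = {"metadata": {"product": {"name": "Dropbox"}},
                 "class_name": "Account Change", "activity_name": "Disable",
                 "actor": {"user": {"email_addr": "johnny@example.com"}},
                 "time": 1700000060000}

def who_did_what(event):
    """One accessor for every app, because the schema is shared."""
    return (event["metadata"]["product"]["name"],
            event["actor"]["user"]["email_addr"],
            event["activity_name"])

for e in (slack_event, dropbox_event):
    print(who_did_what(e))
```

Without a common schema, each application's raw log format would need its own extraction logic before a cross-app question like "who did what, where" could even be asked.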

And with the recent add-on, our searches become more performant, visualizations become more reliable, and it just overall enhances the data exploration experience.

And finally, data analysis. You just saw in the demo how Splunk made it really easy, with custom dashboards and everything, for Krista to investigate the incident. Analysts can leverage these Splunk frameworks to build their own searches, detections, and things like that, either with simple point-and-click or with custom alerts and visualizations.

It helps analysts detect malicious activity really fast. Now, advanced users can hop on to any of Splunk's premium products, like Splunk Enterprise Security, where you have sophisticated features like risk-based alerting, ML-based predictive analysis, and notable events, things that can effectively help analysts resolve incidents.

And there's also Splunk SOAR, which is our comprehensive security orchestration and automation tool that helps analysts take immediate actions to mitigate any threats, things like that. And the SOAR product comes with over 400 plugins, and our customers can use any of them and write playbooks on top of them, which basically helps them resolve those incidents pretty effectively.

That's what you can do with Splunk, and we've only scratched the surface, by the way; AppFabric users can benefit much more with Splunk. But over to you for now, Tamar.

Tamar: Thank you. I love that. I love hearing all of the ways that customers can extract value out of it. And customers have already started using AppFabric and are telling us that they love the benefit it delivers to them.

So who are the customers that have started to use AppFabric? Well, we're seeing a lot of success with customers that operate in highly regulated industries: financial services, healthcare. We're seeing customers of all sizes benefit from AppFabric, from startups and small and medium businesses to even larger organizations.

The customers using AppFabric are a SaaS-first, cloud-first type of profile, and they're customers who have adopted SaaS applications quite rapidly and are starting to feel the pain of not being able to effectively monitor what's going on, in terms of data loss, across these apps.

For example, Bank Leumi is Israel's largest bank, and they told us that our solution helped them free up capacity to handle more strategic work. But it's not just larger customers, also smaller ones. A mid-sized financial institution that's using AppFabric has told us that AppFabric has helped provide critical data to monitor activity they didn't have visibility into before, in a super simple and frictionless way.

And it's not just financial services; healthcare customers are also using AppFabric. And it's important to call out here that AppFabric is HIPAA compliant, a key requirement for customers such as Amazon Healthcare, who are quoted on this page.

And so as we get close to ending our presentation today, I wanted to take a moment and recap what we talked about. We talked about how achieving observability over anomalous behavior occurring across your SaaS applications is hard for a variety of reasons.

You know, the different data frameworks, the data pipeline management, the integrations, the normalization. We showed you how AppFabric addresses these challenges in a very simple way, by standardizing these audit logs to a common, security-centric schema and taking away all that overhead so you don't have to deal with it anymore.

And it gives you a lot of choice in the format, the destinations, and so on. We also showed you that when you combine AppFabric with a supported solution like Splunk, security teams can extract a lot of value very quickly and very efficiently, with minimal effort, and improve the overall security posture.

Getting started is super easy and very quick, as you saw. So I encourage all of you to get started by going to the AWS console, creating an AWS account for free or using an existing one, connecting your SaaS applications to AppFabric by providing those authorization credentials, and starting to benefit from those transformed, normalized audit logs.

Catch us throughout the rest of the event. There are a lot more sessions that our team is doing, including sessions about the generative AI-powered productivity assistant. Please scan this QR code and add AppFabric to your schedule, or come up after the session is over and ask me any question you'd like; I'm happy to answer and point you in the right direction. And also catch us at the expo hall, at the business applications booth.

So please feel free to come and meet us there. Thank you for spending time with us this early afternoon, and please, please complete the session survey on your mobile app. We always strive to do better and improve. Thank you.

Yeah, we have some time for questions before we wrap up, I believe.
