Building security operations with Amazon OpenSearch Service

Good morning to all of you. We know it's the last day of the re:Invent 2022 Friday morning. We had a replay yesterday. So thank you so much for being here. We really appreciate it.

The session today is about building security operations with Amazon OpenSearch Service. So when you deploy Amazon OpenSearch Service, what are the security framework, methodology, and tools? That's what we're going to get into today.

My name is Manish Arora. I'm part of the worldwide go-to-market specialist team for OpenSearch Service, and we have Jimish Shah from the product management team and Kevin Fallis from the solution architect team. Today, the three of us are going to share the entire framework, methodology, and tools with you, so that you can deploy OpenSearch Service and be ready to make it secure.

So with that, let's get started. But before I do: in this room, how many of you are using OpenSearch Service right now? Just for me to know. Excellent. Thank you. And how many of you are not using it but considering it? That's fantastic. Thank you. And I'm sure the rest of you want to explore the OpenSearch use cases — how am I going to use it? What's going to happen? We've got everything covered in this session for each one of you, and we are excited about it. Now, the agenda.

We will look at OpenSearch and Amazon OpenSearch Service. What is OpenSearch, what is Amazon OpenSearch Service, and how are they related yet separate? Then we get into how you protect your data, which is so important when you deploy OpenSearch Service. We will share some customer use cases and stories with you during the session, and we also have a couple of demos to give you a real feel for how some of these cool features work — Ana will help us with that.

So with that, what is OpenSearch? OpenSearch is a community driven open source search and analytics suite derived from Elasticsearch 7.10 under the Apache 2.0 license. This is exactly as per the definition. This is what OpenSearch is all about. It is growing at a rapid pace. We have 1.4 million downloads within a short span of time for the open source OpenSearch. The partner community is growing with community projects growing every week. There are hundreds of contributors, thousands of pull requests, so many repositories getting built. So there is a lot of momentum when it comes to OpenSearch.

OpenSearch is a powerful analytics engine, especially for those of you who are new to it right now. As some of you shared in the beginning, it's a powerful analytics engine for two broad use cases: the search use case and the streaming-data analytics use case.

On the search use case, customers use OpenSearch to bring in large amounts of unstructured and text data and make their search seamless and personalized by running full-text search over that data, with capabilities like natural language queries that return relevant results.

From the analytics perspective, customers bring in large amounts of real-time data. They ingest real-time data, and they can store, centralize, and analyze this data with a variety of visualization options. They can bring in observability data — log data, trace analytics data with OpenTelemetry — plus security analytics and anomaly detection.

So these are the use cases for OpenSearch. Let's move on to Amazon OpenSearch Service. Amazon OpenSearch Service is a fully managed service for OpenSearch, to deploy and run it at scale with security and reliability.

Customers usually choose Amazon OpenSearch Service because they want to offload that heavy lifting so their teams can focus on extracting value from the data. Maintaining, securing, updating, and all of that — they leave it to AWS when they use OpenSearch Service.

Cost optimization is part of it, which means you get various storage options. When you use OpenSearch Service, there are different storage tier options, so if you keep data for the long term you can store it cost-effectively, and there are various other options to keep costs optimized all the time.

The OpenSearch Observability plugin comes as part of OpenSearch Service. What that means is that when you deploy Amazon OpenSearch Service, the Observability plugin comes with it: you can start sending your observability data as soon as you set up an OpenSearch Service domain, or cluster, and once you've started sending the data, you can start analyzing it.

Today's session is about security, and I'm sure that's what you're all here to hear. As I said, it's a level-300 conversation on the security framework, which means we'll really dive deep into it.

Security is day zero as far as OpenSearch Service is concerned, and we are going to get into the security framework, methodologies, and tools with Jimish. Over to you, Jimish.

JIMISH: Thank you, Manish.

Hi, everybody. My name is Jimish Shah. I'm a senior technical product manager on Amazon OpenSearch Service. I'm really excited to talk to you about some of the security capabilities of OpenSearch Service that you can use to build out security operations.

So let's dive in. I want you to think about what are your security requirements. As you can imagine, every organization has very different, very unique security requirements.

Do you have confidential business data that you need to protect? Do you need strong data access control features that can help you give different sets of users limited access to your data sets? Is it important for you to audit user actions, so that you can monitor trends and see what types of queries your users are sending to your systems? Is data encryption important to you? Do you need to protect your data both when it's sitting at rest and when it's moving in transit?

Do you need advanced security controls? We have customers who would like to protect data not just at the index level but also at the document or field level — protecting subsets of documents, or excluding certain fields from searches. For that, you need some of the advanced security capabilities we'll be talking about through the session.

Do you have an existing identity provider — a SAML identity provider, OpenID Connect, or IAM — that you need to integrate with, because your users already use it today with all your other applications? Wouldn't it be easier to integrate that with the new solution you're looking for?

Customers often tell us that compliance is a key requirement: they're looking for a solution that is compliant with HIPAA, SOC, FedRAMP, and many other compliance regulations.

And lastly, is it important to you that the solution provides backwards-compatible security patches? That way you're not forced to upgrade, and you have the peace of mind that somebody is going to look at security issues, find a fix, and provide a patch for the version you're running, without disrupting — or while minimizing downtime for — your business functions.

As we'll see through the rest of this presentation, we'll talk about many of these features with Amazon OpenSearch Service that can potentially help you to check all these boxes.

So let's take a 10,000 ft view of security. The best way to think about OpenSearch security is defense in layers. Let's talk about five layers today.

The first layer, as you can see here, is the network policies layer. This is where you would define AWS Identity and Access Management (IAM) IP-based policies, so you can restrict the type of traffic that gets to your OpenSearch clusters. You can also define network and routing policies to control which traffic reaches your cluster.
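As a rough illustration, an IP-based resource policy for a domain might look like the following sketch. The account ID, domain name, and CIDR range are placeholders, not values from this session:

```python
import json

# Hedged sketch of a resource-based domain access policy that only
# allows HTTP actions from a given source IP range.
ip_based_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            # es:ESHttp* covers the HTTP verbs (GET, POST, PUT, ...).
            "Action": "es:ESHttp*",
            "Resource": "arn:aws:es:us-east-1:111122223333:domain/my-domain/*",
            "Condition": {
                # Only requests originating from this CIDR are allowed.
                "IpAddress": {"aws:SourceIp": ["192.0.2.0/24"]}
            },
        }
    ],
}

print(json.dumps(ip_based_policy, indent=2))
```

You would attach a policy like this as the domain's access policy; everything outside the listed CIDR is denied by default.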

The second layer is Virtual Private Clouds, or VPCs. VPCs are like traditional networks in your data center, but they're built in the cloud and built for scale. VPCs also provide many advanced capabilities to filter and control what type of traffic gets to your OpenSearch clusters.

The third layer is authentication and authorization. This is where some of the other IAM policies come into the picture. You want to protect your data, but you also want to decide which users can reach your cluster and what actions they can take on it. So you can define some policies — we'll talk about that in just a second.

And now we are going much deeper into these layers of security: fine-grained access control. That is an advanced security capability available with OpenSearch Service that can help you restrict access to specific data sets for different users accessing the same cluster. You can do field masking, and there are lots of other security capabilities that we'll be talking about.

And the last one is data encryption. You have all these security capabilities, but for a good security posture it's very important to protect your data and keep it secure with strong encryption keys.

So those are the five layers of security with OpenSearch, and they're important. Now let's look at the four foundational pillars of security with OpenSearch.

The first is data encryption. When you bring your data into the cluster — when it's at rest, when you're querying it, during a cross-cluster search, or when it's moving between nodes — you want to make sure this data is always protected. We do that by integrating with AWS Key Management Service, using KMS keys to encrypt and protect your data.
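To make this pillar concrete, here is a sketch of the encryption-related parameters you might pass when creating or updating a domain (for example via boto3's `create_domain`). The KMS key ARN is a placeholder:

```python
# Hedged sketch of domain options covering encryption at rest,
# node-to-node encryption, and TLS enforcement in transit.
domain_encryption_options = {
    "EncryptionAtRestOptions": {
        "Enabled": True,
        # Omit KmsKeyId to let the service use an AWS-managed key instead.
        "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    },
    # Encrypts traffic moving between nodes inside the cluster.
    "NodeToNodeEncryptionOptions": {"Enabled": True},
    # Require HTTPS for all client traffic to the domain endpoint.
    "DomainEndpointOptions": {
        "EnforceHTTPS": True,
        "TLSSecurityPolicy": "Policy-Min-TLS-1-2-2019-07",
    },
}
```

Together these cover the "at rest", "between nodes", and "in transit to the client" cases the speaker describes.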

The second one is authentication. This is where IAM comes into the picture. OpenSearch Service integrates with AWS IAM, so if you are already using services within AWS and have IAM principals, you can use those same principals and define, within your policies, the set of actions they can or cannot take on OpenSearch.
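For illustration, an IAM identity policy that grants a principal read-style HTTP actions on one domain might look like this sketch (the account ID and domain name are placeholders):

```python
# Hedged sketch of an IAM identity policy you could attach to a user
# or role, allowing only GET and POST (search) against one domain.
identity_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["es:ESHttpGet", "es:ESHttpPost"],
            "Resource": "arn:aws:es:us-east-1:111122223333:domain/my-domain/*",
        }
    ],
}

print(identity_policy["Statement"][0]["Action"])
```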

The next one is authorization...

So you have authenticated users logging into your cluster. Then what — what actions can they take? Now we're going deeper: what documents can they access? Can they see certain PII data, or do we need to mask it? This is where things like fine-grained access control come into the picture.

And the last one is audit and compliance. We touched on this a little on the previous slide, but it's important that we give you features that can help you meet your audit and compliance needs: to track the usage of your cluster and the types of queries your users are sending to it.

So over here OpenSearch integrates with Amazon CloudWatch and AWS CloudTrail where we send different types of audit logging data and we'll be talking about these features shortly.

Now, let's talk about the AWS security controls. So we looked at these five layers of defense. Many of these features are available as part of your AWS ecosystem for you to use.

So, the AWS IP-based policies: if you're not using VPCs and you have a public-facing domain, at a minimum we recommend an IP-based policy to protect your data and your network.

The next one is VPCs, or Virtual Private Clouds. As I said, these are like traditional networks in your data center, but with a lot more security capabilities and built for scale in the cloud. VPCs have features like security groups, where you can limit traffic at an instance level, and network access control lists, which act like a firewall where you can control inbound as well as outbound traffic.
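As a small, hypothetical example, a security group ingress rule that only admits HTTPS traffic from an application subnet could be described like this (the CIDR range is a placeholder):

```python
# Hedged sketch of a security group ingress rule, in the same shape
# as the EC2 API's IpPermissions: allow TCP 443 from one subnet only.
ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 443,
    "ToPort": 443,
    "IpRanges": [{"CidrIp": "10.0.1.0/24", "Description": "app subnet"}],
}
```

Attached to the domain's VPC endpoints, a rule like this means nothing outside the application subnet can even open a connection.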

And we also have VPC flow logs where you can monitor the usage or the network usage and network trends for better understanding of how your cluster is being used.

And the two other policies we have are the IAM identity policy and the IAM resource-based policy. In the identity policy, you give your IAM principals a list of actions they can take; similarly, in the resource-based policy, you define what actions an authenticated principal can take on this cluster.

So we combine all of these policies to provide you with a net security posture that helps you protect who can access your cluster.

So we talked about the first three layers of defense. Now we're going deeper into the two layers, the two deeper layers of security.

So fine-grained access control is a suite of advanced security capabilities with many features to help you protect your data. It has a lot of granular data access control features.

Fine-grained access control also enables you to create and support different types of identities. We can manage basic authentication identities — if you have a small set of users in your organization, you can simply create local identities on the cluster and manage them there.

Or, if you have a SAML-based identity provider, you can integrate it natively with OpenSearch Dashboards. You can use OpenID Connect with Amazon Cognito, you can use IAM, and you can integrate with LDAP servers via Active Directory Federation Services (ADFS).

As you can see, there are many different types of identities that we can support.

Next, we also support cluster-level permissions and index-level permissions. Your data is in the cluster, but how do you give certain sets of users privileged access? For example, you may tell your admin: hey, Mr. Admin, you should be responsible for monitoring the health of this cluster and for taking snapshots of its data. Cluster-level actions like those are defined under cluster permissions, and similarly for index-level actions.

When data comes into OpenSearch, it is stored in an index. So you can control actions like who can write to a certain index, who can read from a certain index, and other similar actions.
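A sketch of what such a role might look like as a request body for the Security plugin's REST API (`PUT _plugins/_security/api/roles/<role-name>`). The role name, index pattern, and exact permission mix are illustrative:

```python
# Hedged sketch of a Security plugin role combining cluster-level and
# index-level permissions: read-only composite ops plus cluster health
# at the cluster level, and read access to one index pattern.
orders_reader_role = {
    "cluster_permissions": [
        "cluster_composite_ops_ro",   # built-in read-only action group
        "cluster:monitor/health",     # allow checking cluster health
    ],
    "index_permissions": [
        {
            "index_patterns": ["orders-*"],
            "allowed_actions": ["read", "indices:monitor/stats"],
        }
    ],
}
```

A user mapped to this role can search and monitor `orders-*` indices but cannot write to them or change cluster settings.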

We also have document-level and field-level security. Within an index there are many different documents, and you can define policies that say: if a user queries this index, only show the documents where the document's user field matches the user ID of the person querying it.

So you can use dynamic attributes as well as static attributes — for example, only show this user the documents where a field matches a specific value. Just imagine the beauty of this: there are different users logging into the same cluster with access to similar data sets, but their views are very, very different. What they can see is very, very different.
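Here is a hedged sketch of a document-level security (DLS) rule using a dynamic attribute. The index pattern and field name are made up for illustration; `${user.name}` is a real Security plugin substitution that is replaced with the authenticated user's name at query time:

```python
import json

# In a Security plugin role, "dls" holds an OpenSearch query as a JSON
# string; it is AND-ed onto every search this role runs.
dls_role = {
    "index_permissions": [
        {
            "index_patterns": ["orders-*"],
            "allowed_actions": ["read"],
            # Only return documents whose "user" field matches the caller.
            "dls": json.dumps({"term": {"user": "${user.name}"}}),
        }
    ]
}
```

Swapping the term query for a static one (e.g. `{"term": {"geoip.country_iso_code": "US"}}`) gives the static-attribute variant the speaker mentions.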

Then we have field masking. Sometimes you want to show your users that the data exists, but you don't want to share its values or let them aggregate on those values. This could be PII or other sensitive information — as in the auditor scenario we'll see in a demo coming up from my colleague, Ana.
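Field-level security and field masking are configured per index permission in a Security plugin role. A sketch, with field names borrowed loosely from the demo data set (treat them as illustrative):

```python
# Hedged sketch of a role restricting what an auditor-style user sees.
masked_role = {
    "index_permissions": [
        {
            "index_patterns": ["orders-*"],
            "allowed_actions": ["read"],
            # Field-level security: a leading "~" EXCLUDES the field
            # from results entirely.
            "fls": ["~customer_full_name", "~customer_last_name"],
            # Field masking: the field is visible but its values are
            # replaced with a hash, so they can't be read or aggregated.
            "masked_fields": ["email"],
        }
    ]
}
```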

And then we have audit logging and compliance. We have multiple levels of audit logging for your compliance needs. When your users send queries to the backend APIs, those queries are logged — you do have to enable the feature first — and you can see who the user is, which host they sent the query from, and a ton of other details around that query.

So all of this information can be logged to the Amazon CloudWatch Logs.
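Shipping the audit logs to CloudWatch Logs is a domain configuration change. A sketch of the relevant parameters you might pass to `UpdateDomainConfig` (the log-group ARN is a placeholder):

```python
# Hedged sketch of the log-publishing options that route the Security
# plugin's audit logs to a CloudWatch Logs log group.
log_publishing_options = {
    "LogPublishingOptions": {
        "AUDIT_LOGS": {
            "CloudWatchLogsLogGroupArn": (
                "arn:aws:logs:us-east-1:111122223333:"
                "log-group:/aws/opensearch/audit-logs"
            ),
            "Enabled": True,
        }
    }
}
```

The log group also needs a resource policy allowing the OpenSearch Service to write to it; that part is omitted here for brevity.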

Now, all of these advanced features are built on some foundational security concepts. So let's walk through some of those concepts.

So users are nothing but authenticated principals — authenticated users who have valid permissions to access your cluster.

Permissions, as we talked about earlier, come in different types. There are cluster-level permissions and index-level permissions, and — because OpenSearch ships plugins like alerting and anomaly detection — there are also plugin-level permissions.

So there are different types of permissions on the cluster. When you combine these permissions into a group, that becomes a role. You can create a role that holds a few index-level and a few cluster-level permissions, and then you can assign that role to your users.

Backend roles are essentially user attributes of your external users. For example, if you have a SAML identity provider, when a user logs in we receive some of their attributes — which workgroups the user belongs to, the user's email address, and things like that.

And we can use these attributes to automatically map users to the security roles we created in the previous step. We'll see in the demo how easily users coming from a SAML identity provider are automatically mapped, and how easy managing all of these security permissions becomes.

So we talked about role mapping — basically the connection between a user attribute coming from your external identity provider and a predefined security role.
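A role mapping can be expressed as a request body for the Security plugin's REST API (`PUT _plugins/_security/api/rolesmapping/<role-name>`). A sketch, assuming an "administrator" backend role arriving from the identity provider:

```python
# Hedged sketch: any user whose identity provider sends the
# "administrator" backend role (e.g. a SAML group attribute) is
# automatically mapped to this security role; named local users
# can be listed explicitly as well.
admin_role_mapping = {
    "backend_roles": ["administrator"],
    "users": ["local-admin"],  # optional: explicit local users
}
```

This is what makes the "manage the workgroup, not the users" pattern work: the mapping never changes as people join or leave the group.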

And lastly, we have dashboard tenants. You may have different user personas or different application teams logging into your cluster, and you might want to create customized views for each of these sets of users. Dashboard tenants help you create isolated views of saved objects — dashboards, visualizations, reports — for different types of users.

So it gives you that isolation when different users log in. And I've been talking about all of this, but we're going to see this in the demo shortly.

Now, let's make this a little bit fun. Let's talk about a large e-commerce customer use case that's based on true events from talking to customers such as yourself.

This customer was bringing hundreds of terabytes of e-commerce order transaction data into their system daily — things like customers ordering clothes or whatever else they're shopping for. This data also includes some PII: information like a customer's age, email address, or date of birth.

This customer also had an application team whose job was to detect potential fraud transactions in this order transaction data. They also have a compliance team whose job is to come in, look at what data is being captured, check whether they still meet their compliance needs, and review dashboards, reports, and things like that.

And lastly, they are using Okta as a SAML identity provider, because they already have many other applications their users log into, and they would just like to use the native SAML integration we have with OpenSearch Service.

So let's set some goals for this use case. What do we want to show you in this demo?

Let's create some data access boundaries. We see there are different types of users here, so we'll need to create boundaries for each of them.

We'll have to restrict permissions to specific data sets. You definitely don't want your auditors to have full access to all your data sets.

We also want to automatically map the users logging into the cluster to these predefined security roles. If you're a large organization, you may have many different users belonging to the same workgroup. Just imagine how easy it would be if — say you have a workgroup like "administrator" — you didn't have to manage individual users at all: you just manage the workgroup, and any user in the administrator workgroup is automatically assigned the security role.

That makes your management much easier, and we'll show you more on that.

And then we'll also look at audit usage patterns. We'll show you a sample of the audit logs, where you'll see what types of queries users are sending in, plus some of the other information we put in the audit logs.

As an administrator, I would need to build three user personas to meet these goals. The first is the DevOps administrator. Their job is to validate and monitor the health of the cluster, and they should also have full access to this data set.

Then we'll have application users. As we discussed on the previous slide, there's an application team whose job is to detect fraud transactions. These application users have access to a limited data set — in this case, only US orders: the e-commerce orders placed in the US.

And the third user we need to create is an auditor: a read-only user for a compliance use case. They come in, look at some dashboards and reports, and make sure the system is still compliant with the organization's compliance needs.

So with that, I would like to invite my colleague, Ana to come up here and show you a live demo of all of these security features.

ANA: Thank you, Jimish.

Now for the fun stuff, right? How many of you are really here for the demo? Really? I think you are among the few who escaped the replay hangover. We were a full house yesterday, and here you are.

Hi, I'm Ana Govindraju. I'm an OpenSearch specialist architect. I'm part of the worldwide specialist organization and we have a fantastic demo for you here lined up.

So today, what we are going to show you is how we would secure your dashboards for the three personas Jimish just spoke about. But before we get there, there are a couple of things I've done ahead of time, because it's a long process, and we'd also like to give you a quick overview of the capability.

Some of the things Jimish spoke about: we have spun up an OpenSearch cluster within a VPC, and we have enabled SAML authentication. When you enable SAML authentication, you are given a set of SAML URLs. We have taken those URLs, and we are leveraging Okta as the identity provider to configure them.

In Okta, we have created an Okta application, and within that application we have configured these URLs and created the handshake. So now the authentication handshake is done. If you want more information about the various steps involved — it's a 30-minute presentation — you can stop by our OpenSearch booth and have one of us walk you through them.

So today I'm going to show you, with these three personas Jimish just spoke about, how we would secure those dashboards, and what features come with the Security plugin. How many of you are actually using SAML here with an external identity provider? Okay, that's quite a few.

Okay, let's get into the demo. Here I've logged in as an admin, which makes me a cluster admin — I have full access to the cluster. Plus, when enabling SAML, I also set admin as the SAML master, which means this user has access to the Security plugin that's part of your dashboard.

Once you get into the Security plugin, there are a bunch of features. Under authentication, like I said, you can see the handshake you created between SAML and Okta right there. This is the metadata Okta generates based on the SSO URLs you feed in, and you configure this metadata back into your cluster.

Once that is done, let's look at the roles created for this demo. Jimish spoke about the three personas: the application user, the external auditor, and the DevOps user. So what roles do these three personas have?

Roles are where you map the role information to cluster permissions, index permissions, and of course any backend roles coming from Okta. So here is a high-level view of what cluster permissions and index permissions each role has, and the backend roles captured from Okta.

Let's look at the application user role. The application user is a generic, internal user. The user has read access to both the cluster and the index, and apart from that, this user has access only to US documents.

Here we have enabled document-level security — more fine-grained access control. The query, when it fires, returns only the documents that match the country code US, and that's pretty much all the application user has access to. There are no other restrictions, and this user has access to the global tenant.

Now let's cancel out, and I'll show you one more mapping we need for this user: the mapped users. Mapped users are where you say which backend group the user belongs to. When I log in as user 1, 2, or 3, I need an Okta group associated with it, and this is how you complete the handshake — you map what roles and permissions the Okta group's users have on your cluster.

That was group 1. Now let me move on and show you the next role. The auditor is an external user, so we have to limit certain access for them. Let me edit the role. Like the application user, this user has read-only access to the cluster as well as the index, and of course can see only US-related content.

On top of that, we have enabled field-level security. This is an external user, and we don't want to leak any customer information, so we have excluded all customer-related fields for this particular user. That's pretty much how it works: if you have a pattern of fields matching a condition, you can list all the patterns for which you want field-level security enforced.

And the last piece is anonymization. There's a set of fields where we want to say: yes, we do have these fields, but we don't want to show their content. When we want to mask certain content from the user's view, we leverage the anonymization feature.

Here, email addresses will be replaced with an alphanumeric character set. So this user is tightly secured — there are a lot of restrictions for the auditor — and the auditor has access to the global tenant. And when we look at all the users here, you can see they are all mapped to some kind of backend Okta group, again based on your spec.

Last comes the DevOps user. They want to create and delete dashboards and visualizations, so they have read/write access to the entire cluster. As you can see, the cluster permissions and the index-level permissions are all read/write, but other than that, no restrictions are placed on this user.

Having seen this, I'm going to log out and log in as the three different personas. The application user, in group 1, is user 1; the auditor, in group 2, is user 2; and the DevOps user is user 3.

So I log in as the application user and verify that the backend role is the application user. Now I go and view my dashboard. The application user has read access to the cluster but can view only US-related documents. This particular dashboard is a sales order dashboard: it gives you a concise view of sales per country, sales per gender and region, and average sales price.

As you can see, everything on this dashboard — every visualization — reflects only the documents the user has access to.

Moving on: let me log out and log in as our external auditor, user 2. The auditor has similar access to the application user, in the sense that they have only read-only access to the cluster and can see only US-related content.

Apart from that, as you remember, we enabled field-level security to remove all the customer information, so any dashboard or visualization reflecting customer information is removed. And there was one more thing we did: mask the email field.

If I go here, these are the fields in my index right now — or rather, not all the fields, but the ones the auditor has access to. As you can see, the customer fields do not exist, and when I click on the email, it's completely masked: replaced with an alphanumeric character set.

Now I'm going to log off and look at the DevOps user's view. The DevOps user has full read and write access to the cluster, so they can see it all. As you can see, all the country information is visible to this user, along with the visualizations for customer information.

When the DevOps user reviews the fields he or she has access to, that includes the customer information: customer last name, full name, and all of that. But note: even though the DevOps user has complete access to the cluster, they do not have access to the Security plugin to change, add, or remove roles. That comes only with the SAML master admin user.

So let's go back and log in as the admin user. I'm back as an admin, so I have access to the Security plugin: I can add or remove roles, look at what the roles are, add or remove permissions, and bring in new Okta groups.

Permissions are something we've seen: how we grant permissions to a user. We selected a bunch of permissions and said, okay, this particular role has write access, or read access. But what we can also do is create action groups — group a bunch of permissions together and map the action group to our roles as well.

That's an option, but these are all out-of-the-box permissions provided with OpenSearch. Tenants, again, are a fantastic concept for segregating your dashboards — for making sure certain sets of users have access only to certain dashboards and visualizations.

So you can separate out what users can and cannot access. And lastly, Jimish spoke about compliance. Audit logs are as easy as enabling a toggle on your system. Once you enable the toggle and have the request body enabled, you can see your logs coming in. I somehow thought I had this on, but just a second — I can quickly show you. Let's look at our cluster, the security demo cluster, which is created inside the VPC. Again, the complete steps to spin up a VPC-based cluster, along with how we enable SAML authentication, are part of our demo at the booth.

So here I'm inside the cluster I created for this demo, and there is a Logs tab. When I go to the Logs tab, there is an audit log, which is enabled right now — I enabled it as part of the demo. If it's disabled, make sure you select it and click the enable button. Once you do, within minutes you'll see your audit logs flowing into CloudWatch Logs. I'm clicking into CloudWatch, and I can quickly show you the set of audit records that have flowed in, based on our accesses. This is how we track access to the cluster.

So we come to the end of the first demo. See you for the second. Over to you, Jimish.

Thank you, Ana, for that insightful demo. So we were able to show you how you can create customized data access for different users in your organization. Ana was also able to show you how you can create personalized views; when you looked at the dashboards, they looked very different for different users. This is just a sample of the capabilities, and you can definitely do a lot more with them. We also talked about restricting permissions to specific data sets: if you have indices and you want to protect certain documents, or you want to mask certain data, you can do that with the advanced features that we have. She also showed you that when your users log in from your SAML identity provider, we automatically map them, based on user attributes (in this case a group), to the security roles that you've already defined. And lastly, we showed you how you can analyze audit logs in CloudWatch Logs and look at the user activity on your cluster.

We also have AWS CloudTrail audit logging enabled by default. So when you create your clusters, any domain configuration change, for example if somebody added a new instance, changed the instance type, or enabled a new feature, is automatically logged to AWS CloudTrail.

And for data encryption, we use AWS KMS keys to encrypt the data. You can either bring your own keys, which you've created in AWS KMS, and provide those when you enable encryption, or we can create the keys on your behalf, store them in your account, and use derived data keys to encrypt the data. You might be wondering what data is encrypted. We encrypt all the data in your indices, your OpenSearch logs, swap files, the data in the automated snapshots that we take on an hourly basis, as well as all other data in your application directory. All of this is encrypted with these encryption keys. In addition, we also encrypt the data that's moving between nodes.
So when you do a search that requires data to move between nodes, that is also encrypted. You have all of those options. Over to you, Manish.
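To make those options concrete, here is a sketch of the encryption-related settings in a `CreateDomain` request to the Amazon OpenSearch Service API (field names follow the boto3 `opensearch` client). The KMS key ARN is a placeholder; omitting `KmsKeyId` lets the service create and manage a key on your behalf, as described above.

```python
# Sketch: encryption settings in a CreateDomain request. The key ARN is a
# placeholder, not a working value.
encryption_config = {
    "DomainName": "security-demo-cluster",
    # Encryption at rest: indices, logs, swap files, automated snapshots.
    "EncryptionAtRestOptions": {
        "Enabled": True,
        "KmsKeyId": "arn:aws:kms:us-east-1:123456789012:key/REPLACE_ME",
    },
    # Encrypts traffic between data nodes, e.g. during distributed searches.
    "NodeToNodeEncryptionOptions": {"Enabled": True},
    # Enforce HTTPS for all client traffic.
    "DomainEndpointOptions": {"EnforceHTTPS": True},
}
```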

Yep, thank you. Thanks a lot, Ana. So we've talked about security features and security layers, and also showed a cool demo of how it can actually be done and how different permissions are given. Now, some of you might be wondering: all this is great, but how do I do this, and how much of it should I do? Unless, of course, some of you have already done it, especially those of you who are right now in the process of evaluating.

So let's put this into three buckets, based on our customer conversations. The first bucket is what we call the minimal bucket, which means you have HTTPS and encryption at rest, some of which is already enabled for you when you set the service up. The second bucket is what we call the basic bucket. It's built on top of the minimal one, but adds encryption in transit, IAM identity policies, VPC, some of those things which you have already heard about from Ana and Jimish. And then we have the advanced bucket, which has SAML, fine-grained access control, and field-level and document-level security all set up.

Now, this is purely directional. It's not that you have to do either/or, but based on what we have seen customers doing, this is how it falls into three different buckets. Just for me to know, do any of you relate to the minimal one here, as in, this is the one I've done, I'm secured, my data is good? And have any of you actually done the advanced one, or can relate to it? Great, there you go. And there are customers who have done the basic piece as well. So depending on your use case and what you're looking at, you can use one of these buckets. That's the whole idea. Now, to enable all of this, you can do it through the AWS console or through the OpenSearch APIs.
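As a rough illustration only, the three buckets map onto domain configuration fields roughly as sketched below (boto3-style names). The subnet, security group, and SAML metadata values are placeholders, not a working setup.

```python
# Sketch: how the minimal / basic / advanced buckets might map to domain
# configuration fields. All concrete values are placeholders.
minimal = {
    "DomainEndpointOptions": {"EnforceHTTPS": True},
    "EncryptionAtRestOptions": {"Enabled": True},
}

basic = {
    **minimal,
    "NodeToNodeEncryptionOptions": {"Enabled": True},  # encryption in transit
    "VPCOptions": {
        "SubnetIds": ["subnet-REPLACE"],
        "SecurityGroupIds": ["sg-REPLACE"],
    },
}

advanced = {
    **basic,
    "AdvancedSecurityOptions": {  # fine-grained access control
        "Enabled": True,
        "InternalUserDatabaseEnabled": False,
        "SAMLOptions": {
            "Enabled": True,
            "Idp": {
                "MetadataContent": "<EntityDescriptor ...>",  # from your IdP
                "EntityId": "https://idp.example.com",
            },
            "RolesKey": "groups",  # map IdP groups to backend roles
        },
    },
}
```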
And there are different options; you can choose whichever you want. So we have a select group of customers here who are using some of these features and other features of Amazon OpenSearch Service. As you can see, they are across verticals; they are not isolated to one or two verticals. Now, from here, let's go to the next level, which is one of the new, exciting things we are doing with OpenSearch Service when it comes to security analytics.

Yes, thank you, Manish. Security breaches are happening all the time; it has become so common to hear about data breaches every single day. Have you heard about a major airline that confirmed a data breach that allowed an unauthorized actor to gain access to confidential and sensitive data about some of their employees and customers? Or maybe you heard about a major oil pipeline that was shut down as a result of a ransomware attack, or perhaps a major rideshare company that was impacted by a social engineering attack. So data breaches have become the norm. All you can do is protect your data and have the right mechanisms in place to detect threats and potential issues in your network. The amount of data that we see today has exploded exponentially in the last decade. We collect application logs and user logs, and there are so many interconnected devices generating so much machine-generated data; it's unfathomable. And because you have so many devices generating so much data, it's even more important to remember that there are many different entry points for attackers into your systems. This means that you need the right tools to detect and respond appropriately to these threats.

So with that, I'm very excited to share with you a new security capability that we recently launched in the OpenSearch project. It's not yet available on the service, but it will be soon. This security analytics capability allows you to monitor, detect, and respond to potential threats. Currently, it is an experimental feature in the OpenSearch project, as we're looking for customer feedback. So I highly encourage you all to go ahead and download it; it's available out there, we just launched it a few weeks ago. Try it out and give us your feedback on what you would like to see in security analytics.

So we'll talk about some of the key requirements for a good security analytics solution. But before that, a quick show of hands: how many of you are currently using a SIEM, a Security Information and Event Management system, in your organization? Thank you, I see quite a few hands. So SIEMs were built for on-premises infrastructure, with a fixed set of rules and a fixed set of things they could do. But this security analytics is built for the cloud: it's agile, it's scalable. You want a system that can help you stay on top of adversarial techniques and tactics. You want the ability to easily update and add content on the fly, because attackers are changing their techniques every single day with even more sophisticated approaches. You want to integrate with external threat intel feeds, so you can always stay on top of malicious IP addresses, malicious hosts, file hashes, dark-net traffic; there are just so many things out there. You also want to enrich with contextual information. For example, if you have users that are, let's say, about to leave your company, you monitor them closely in case there's any data exfiltration; you can match against a list and see what those users are doing. And obviously, one of the most important aspects of a security analytics solution is the ability to detect threats as well as hunt for threats in your data. So these are some of the key requirements for a security analytics solution that we have in mind, and we are taking the first step with security analytics in the OpenSearch project.

So I'll quickly go through some of the capabilities we have with the feature we just launched.

So we have 2,000+ prepackaged security rules out of the box. These are open source Sigma security rules. We support eight different log sources out of the box, like Windows logs, S3 access logs, and DNS logs, and you can customize threat detectors for these logs.

So if you're bringing in Windows logs, you can apply these prepackaged rule sets to your detector, and they will automatically run in the background and generate security findings when a certain condition matches.
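For a feel of what these rules look like, here is a minimal Sigma-style rule of the kind Security Analytics prepackages. The title and the `psexec` service-name match are illustrative only; the structure (logsource, detection selection, condition, level) follows the Sigma rule format.

```python
# Sketch: a minimal Sigma-style detection rule (illustrative, not one of
# the actual prepackaged rules).
sigma_rule = """
title: Suspicious Service Installed   # hypothetical example rule
logsource:
  product: windows
  service: system
detection:
  selection:
    EventID: 7045                     # Windows "new service installed" event
    ServiceName|contains: 'psexec'
  condition: selection
level: high
"""
```

When a detector ingests a matching document, a rule like this produces a finding tagged with the rule's severity level.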

And when you go through the findings, each finding has a lot of information about the document and the condition that triggered the security rule, giving you all the information you need to take your investigation further.

You can then also create security alerts on these findings, so that the next time something like this happens, you are able to fire an alert and get a notification on Slack or Amazon Chime or something like that.

And we also give you a rule builder to copy rules and create your own. And of course, we have dashboards where you can create custom dashboards to get quickly to the insights that you need for your security investigation.

So with that, I would like to invite my colleague Ana to come up and show you a live demo of these security capabilities. Thank you, Jimish. Now for the cool stuff.

So this is the first ever public demo of security analytics. Let me switch over to the demo screen. Okay, sorry about that.

So the security analytics plugin is available as part of OpenSearch Dashboards. When you download the 2.4 version of open source OpenSearch, you will see security analytics as part of it.

And there are multiple concepts and entities that Jimish just spoke about, so we're gonna see them all in five minutes.

So first we start with rules. There are the 2,000+ prepackaged rules that Jimish spoke about; they are under the rules section. These rules map one-to-one with log types.

There are eight different log types that we support today. So when I click on the rule types, these are the sets of logs that we support, like network logs, DNS logs, Apache access logs, Windows logs, and LDAP logs.

So those are the eight log types, and we are still building on top of that. And the 2,000 rules that you are seeing right now on the screen map one-to-one with the log types.

So when we are ingesting, say, Windows logs, the system knows exactly which rules to bring in for Windows logs. Next is the rule severity; again, all of this is prepackaged and available as part of the system.

We encourage you to go and try out the new 2.4 security analytics. So the severities are critical, high, and so on, and the source, as Jimish just mentioned, is Sigma; these are Sigma rules that we have used to load these, but you can also create your own custom rules as well.

Today, I'm gonna show you how to build a detector. So what is a detector? A detector is an entity where you say: I'm going to ingest, say, Windows logs, I want to track certain rules against those logs, and I want to keep a monitor on them.

Keep a close watch on it, and alert when an event occurs. So let's quickly create a sample detector.

So here, let's call it "sample detect" and give it a description. And here we get to choose the name of your index, however you are ingesting your log files, your system files.

So today I have created an index by the name "windows". And the threat detector types map one-to-one with the eight different log types that we just spoke about.

So let me choose Windows logs. When I choose Windows logs, there are 1,575 rules that pertain to Windows logs.

So here, by default, all the rules are selected, but you have the option to choose which rules should fall under a specific detector. Then we move on.

And then you have a detector schedule. This defines how frequently you want the detector to run against the index. Let's go to the next page.

This is a very critical page when we create a detector. This is where we say: these are the fields in my index that I want to be monitored, the fields that map to the rule fields.

So some of the fields I have in my index are these, and I'm going to quickly map them to the rule fields: provider name and then service name.

So these fields are already part of my index. Then I move on, and I create an alert.

For alerts, you can either create one on this page, or you can go look at your findings and create an alert from the findings page. We will create one as part of the detector.

So you can actually specify which rules should make these alerts go off. I'm not going to mention a rule name; instead, I'll say I want the alert to be triggered for every critical rule

that is part of this particular detector, and then alert and notify. Again, as Jimish mentioned, this tool leverages the notification service, the notification plugin, which is also part of 2.4.

And I can send the alert information to email, to Slack or Amazon Chime, or any messaging service that you're running internally, as long as you can set up a webhook. Then, moving on, and that's it.
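As a hedged sketch of that wiring, notification channels are defined once in the OpenSearch Notifications plugin and then referenced by alert actions. The webhook URL and channel name below are placeholders; the body shape follows the Notifications plugin's channel-configuration API.

```python
# Sketch: a Slack channel definition for the OpenSearch Notifications
# plugin (POST _plugins/_notifications/configs). Alert actions reference
# the resulting config ID. The URL and name are placeholders.
notification_config = {
    "config": {
        "name": "secops-slack",
        "description": "Channel for critical security analytics alerts",
        "config_type": "slack",
        "is_enabled": True,
        "slack": {"url": "https://hooks.slack.com/services/REPLACE/ME"},
    }
}
```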

It's as simple as that to create a detector; it's all set. Now this detector will act upon the windows index that is part of your system, and it's going to generate findings automatically.

And if it comes across an event that matches the rules we said should be fired, then it's going to generate findings for us.

So this is the alerts page. Let me go back and look at the findings. We have prepopulated some findings; I mean, we created a detector earlier, and we have a bunch of findings as part of that. Let's quickly look at those.

Here you go. So these are the findings based on the events that have occurred on your data over the last several days, auto-generated based on your detector rules.

Finally, we come to the overview page. This gives you a quick summary: what findings have happened over time, and what the recent alerts and recent findings are.

And what the most frequently triggered rules are, and what detectors have been created. Thank you very much. Sorry that had to be quick. Over to you, Jimish.

Thank you, Ana. So what are you waiting for? We've just shown you how easy it is to investigate, detect and monitor potential threats in your organization.

We also talked about how you can encrypt your data as well as use fine-grained access control to manage the various policies for your data and your organization.

We also talked about audit and compliance needs and what tools you can use to meet your organizational requirements. And lastly, we've talked about a comprehensive suite of security features that you can use to build security operations with Amazon OpenSearch Service.

It is very easy to get started, isn't it? So we are quite excited about this, and I hope you are excited about these features too. It's actually easy to start setting up a cluster, especially for those of you who are new to this.

It's very easy to start a cluster on the service. You can go in, and within a few minutes the cluster is up and running. There are tutorials to guide you and tell you exactly how to go about it.

And there is plenty of help from account teams and other teams to support you with that. The QR code here takes you directly to the security page and some of the things we have talked about today.

So keep yourself up to date. These are some of the resources that can help you stay not only up to date but also continuously learning: the public roadmap, the documentation, whatever is going on.

So that at any point in time you know where we are, what we are doing, and which direction we are heading. One thing I want to highlight is contributing to the OpenSearch project.

So if you are interested, please go to opensearch.org and start contributing. Let's make the whole community thrive.

With that, we come to the end of the session, just in time. Really appreciate your time once again on a Friday morning. Thank you so much for being here. I hope you enjoyed the session and found it useful. Thanks a lot.
