Transforming data investigation with ES|QL, the Elasticsearch Query Language

Welcome to this afternoon session. I hope that we can keep you excited, even though it's an afternoon slot. My name is Nina Sla and I'm here with James Pari, my colleague from the security team. We're going to talk about ES|QL and how it's going to help us transform data investigation.

I'm super excited to be here and talk about ES|QL. It's been a long journey, enabled by our amazing engineers across the whole company at Elastic, and the journey is just getting started. We're super excited about it.

Before we start off, can you raise your hand? How many of you are using Elastic today? About half of you. So I hope you will see the benefit of ES|QL, and for the rest of you, I hope we can inspire you to go out and try it. We have a public demo site you can go and play with, or you can set up a cloud account.

Let's start with one of the challenges we have today: a mountain, illustrated up here, of unstructured data coming in. I think the statistic is that 90% of data is unstructured, and we also have a lot of customers ingesting data in all kinds of formats. We want to enable our users to sift through these data types (logs, metrics, events, you name it) in a much better way than they do today.

We have very powerful UI-driven workflows in Kibana analytics today to access this data. However, we know that we are onboarding a lot of users who are accustomed to more expression-driven workflows and to piped query languages. That's why we are investing in ES|QL.

Before we dive in, I'd like to talk about some of the challenges we have today. When users are accessing data, they need to go to several places and use several methods to get hold of the logs and get hold of the metrics. Our user base has grown over time, from application developers to SREs to threat hunters, and so have their data types. This has led Elastic to develop several query languages to access the data, and also ways to organize your data via schemas, schema files, transforms, and so on.

We are really focusing on shortening the time to insight and the mean time to resolution. That's why we want to shorten the path to data for our users.

So what I want you to do is imagine having everything in one place: you can search, calculate, transform, filter, aggregate, and visualize, all from one screen. Then imagine a next-generation search engine that is capable of faster search while delivering a comprehensive language for data investigation and exploration. And last, imagine a way for every individual to access data, explore it, and visualize it.

So, introducing ES|QL: turning your data into gold.

When I talk about ES|QL, I see it as three things, kind of like a Kinder Surprise. The first is a very powerful query engine that improves efficiency and delivers query speed: with the ES|QL query engine, searches and aggregations are executed in multiple stages, concurrently, for greater speed and efficiency.

Second of all, we have a new piped query language that makes it easier to transform your data, simplify data investigation, and unlock insights for your business. It enables companies to do aggregations at the same time as visualization. One example could be flat log messages that you are grokking, or parsing, and then doing an aggregation on top of; James will demo later what that looks like and how it's going to help our users shorten time to insight by doing several things at once.
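To make that concrete, here is a hedged sketch of that pattern: parsing flat log text and aggregating on the extracted fields in a single query. The index name, the `message` field, and the grok pattern are illustrative, not from the session:

```esql
FROM logs-*
// Parse the unstructured message field into typed columns on the fly
| GROK message "%{IPORHOST:client_ip} %{WORD:http_method} %{NUMBER:status:int}"
// Aggregate on a column that did not exist in the stored documents
| STATS requests = COUNT(*) BY status
| SORT requests DESC
```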

And the third Kinder Surprise is the unified query experience. This will be enabled across Kibana, in both Security and Observability, and you will be able to query, aggregate, and visualize your data all in one place. Search, rules, alerts, you name it: ES|QL will be the way you define and search your data.

The first is the distributed query engine. As I mentioned before, we are launching a new distributed, dedicated query engine whose purpose is to help our users expedite data handling, allowing for quicker and more responsive analysis. The endpoint is `_query`, and there is no translation layer like we have today; queries go directly to `_query`.

Queries are parsed and optimized for distributed execution. The engine operates in blocks instead of one row at a time, and it takes advantage of specialization and multithreading. Our benchmarks are actually showing that ES|QL is outperforming our existing query engine in many instances. We have a public performance dashboard where you can follow how it is progressing; there is a link in the presentation so you can access it later. It shows that ES|QL is faster than our current aggregations in some cases, even without many optimizations.

What the chart shows here is a query on the New York taxi dataset, where we have a simple aggregation: the average of the total amount, grouped by passenger count, sorted by passenger count. The pink line is multi-threaded ES|QL, the blue one is single-threaded ES|QL, and the green line is our current aggregation framework. Lower is better, and the axis is in milliseconds. It's quite impressive.

The second is, of course, the language itself. When you write an ES|QL query, you start off with a source command, then a series of commands chained together with pipes. A source command retrieves or generates data in the form of a table. So if you take the FROM command, it will generate a table retrieved from Elasticsearch, illustrated by the icon here.

You would then take that table and apply processing commands. A processing command takes as input the table produced by the previous command and produces a table as output; for example, you can use EVAL for that. You can chain processing commands separated by the pipe character, and each command works on the output of the previous one.
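As a minimal sketch of that chaining (the `employees` index and its fields are illustrative):

```esql
FROM employees                     // source command: produces the initial table
| EVAL height_cm = height * 100    // processing command: appends a derived column
| KEEP first_name, height_cm       // keeps only the columns of interest
| SORT height_cm DESC
| LIMIT 5
```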

I've put up a table of some of the processing commands that we have in ES|QL already today. ENRICH is a lookup; I think James will demo that later, where you can actually enrich a data set using a lookup. And EVAL, of course, transforms your data in whatever way you wish.

ES|QL has a comprehensive set of functions and operators for working with data. So far we have these categories, where the most popular are, of course, the aggregation functions that can be used together with the STATS command; but we also have things like the date functions. There is very comprehensive documentation you can go and look at, to see what you are capable of doing with the ES|QL query language and whether it fits your needs.

So what does an ES|QL query look like? If we take this example, we have both the query and the result represented on the right. You see the query that selects data from Apache logs, then filters for log entries related to the login page, buckets these entries into specific time intervals, counts the number of logins for each user in those time intervals, and then outputs the results sorted by the highest number of login attempts.
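A hedged reconstruction of the query being described, where the index, field names, and the one-hour bucket size are illustrative guesses based on the narration:

```esql
FROM apache-logs
| WHERE url.path == "/login"                       // only login-page entries
| EVAL bucket = DATE_TRUNC(1 hour, @timestamp)     // bucket into time intervals
| STATS login_count = COUNT(*) BY user.name, bucket
| SORT login_count DESC                            // highest login attempts first
```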

This produces both a tabular form of the results and a visual representation; we will showcase that later as well. In short: it's expressive, powerful, composable, extendable, and fast.

The third Kinder Surprise is the unified user experience: how are you going to interact with ES|QL? ES|QL enables users to do data exploration, data transformation, and data visualization all in one place. What we're showcasing here is Discover: you have the query, and you get both the visual representation and the tabular format.

We like to say that ES|QL doesn't just show data, it brings it to life. It will not only be available in Discover; it will be available in the Security timeline, you can create alerts with ES|QL as well, and it will be accessible through the AI Assistant.

So we have an AI Assistant that can help you write ES|QL queries for a particular use case. You can ask it questions like "how can I investigate these data types?" That is a really cool feature, especially if you're new to a piped query language; you can get a lot of help there as well, because we are feeding it our comprehensive documentation.

When we started building ES|QL, we set up a private beta program, where a lot of customers contributed and came to us with input, and we talked to a lot of users. One of them was Informatica.

Informatica is a company that specializes in data management and integration. It's an enterprise cloud data management leader, and they provide software and services to help organizations organize, process, and make sense of their data. This includes tasks like data integration, data quality, data governance, and so on.

They actually have a booth here at AWS re:Invent, so if that's something you need, you can go ahead and visit. They are using Elastic across the board: Informatica has both observability and security use cases, and they're utilizing Elastic to shorten the time to insight and resolve incidents faster.

On top of that, they are also utilizing Elastic's powerful SIEM offering to accelerate fixing security issues before they become a problem. That's why the quote is here: they have 8.5 billion events per day that they're utilizing the platform to sift through while investigating the different security threats they're facing. And to give you some statistics about Informatica's usage of Elastic:

They have more than 60,000 shards, they are ingesting 37 terabytes per day, and they run 200-plus data nodes, just to mention a few numbers.

So we asked some of their users, and this was one of the quotes they came back with: "ES|QL is going to change everything, and we have been looking forward to it for many years. Once released, it will be our primary query expression language." And when they saw the ability to aggregate, filter, and do lookups in one place, they called it a huge time saver.

They're looking forward to it very much. One of the examples we talked about is how they will actually use ES|QL: they will use it to aggregate, transform, calculate, and search metrics, logs, and traces, pinpointing performance bottlenecks and system issues and reducing time to resolution by removing the hassle of navigating multiple screens.

And then, of course, to build Observability and security alerts with aggregated values as a threshold. On the right here we're seeing how the ES|QL alerting feature works, where you can utilize the language to set up an alert rule type.

The second thing is, of course, visualization. DevOps teams are constantly in a race against time, deploying multiple updates, patches, and new features, so visualizing performance bottlenecks and system issues with ES|QL charts is super important for them and can help them a lot.

I'll pass it over to James, who is going to do a demo.

Thanks. OK, good afternoon, everyone. Let me just switch here. Awesome. Thanks for sticking around for the afternoon session; I know it's been a long day. I'm going to give you quite a few different demos. I'm going to start with a security one, because I am a security guy (I'm on the product management team for security), talking about how we can pivot from detections to threat hunting and back to detections, and do all of that really cool stuff with ES|QL. I have a bunch of other demos as well, but I think this one will really resonate, especially if you are a security user in the room. If you're not, it will also resonate, because ES|QL is pretty awesome.

All right. So right now we're in our Alerts view within Elastic Security. This is basically the page where, if something bad is happening in a user's environment, they get alerted on it, based on the rules that we provide or any custom rules they've written.

I'm looking at a rule that said: hey, you have a potentially malicious hostname that has been queried via DNS. And I can see here basically all the details; hopefully you can see it on the screen. There's a DNS question name of cdnverify.net over here.

So typically, the workflow of someone looking at these alerts would be: OK, let me look into this successful DNS lookup and get some information about, for example, the resolved IP address, and so on.

So what we're going to do is pivot to our Timeline view, and I'll talk about the experience that existed before ES|QL, and then how ES|QL complements that existing behavior.

Within the timeline, a user can run several different kinds of queries. One of them is a very basic KQL query. What I've done here is filter, using KQL syntax, really simply: we're going to look for event types where there was a DNS lookup result, where the outcome was successful, and specifically for the DNS question name cdnverify.net. And we have quite a few results here.

OK, so we can see cdnverify.net here. If I look deeper into the actual event, one thing I'd like to know is: what did this name resolve to, from the DNS perspective? Now, because of the way we collect these logs (an event type on Windows called Event Tracing for Windows, where we get DNS events), they don't actually give us that field in a nice format. It's actually part of the message field here, where they say: hey, your DNS query completed, and it was resolved to this IP address you can see there. But it's not parsed out as part of the default data set.

But that's important to me; I'd love to be able to use that IP field to do further investigation, and so on. Prior to ES|QL, working with something unparsed like this was a pain. If you're used to Elastic, you know that if you asked Elastic how to deal with this, our primary response would be: oh, create an ingest pipeline and reindex, or something like that, which was never a great answer to give you.

Now, with ES|QL, we can tell you to do it as part of the query. So I'm going to pivot to my ES|QL tab here. You can see it's part of the same experience; I haven't left my Security solution area, and as an analyst I'm still focused where I know to be. What we're going to do is build this query together.

With ES|QL, as Nina was saying, we start off by declaring: this is the index, or indices, that I want to query. Whereas previously, with something like KQL, you would select this from the drop-down picker, it's now part of the query itself. So we're going to look at those endpoint events, specifically the network events. OK, great. That's the first line.

Let's do some basic filtering: the same filters we had before in KQL. So we have our first pipe, and you can see the WHERE command there: I just say WHERE, and we filter. Pretty straightforward, nothing too different.

So far, I'm going to run it like this just so you can see what we get, and we get results like you would expect from previous experiences with KQL, nothing too different. But here's where it gets different.

Let's parse out that IP address from the message field. We're going to use our DISSECT command. Dissect is something we've had available in Logstash and in ingest pipelines for a while; we've brought it into ES|QL with the same syntax you're used to, so it's not different. DISSECT works in exactly the same way.

We're going to parse it out into a new field called `dns.resolved.ip`. Really simply, the way dissect works is: follow this pattern, and when you find a match, extract it out. So why don't we go ahead and do that, and now it's extracted out.

You'll notice as well that we're not really waiting. Notice how quick that was: even though we're doing inline parsing over the last seven days' worth of DNS events, we didn't have to wait at all, which is something Nina was alluding to. We built this for speed and scale.

Let's do some additional filtering: we're only interested in events where that new field we just parsed out is not null. Let's limit it to the top 1,000 results, and then let's only keep the fields we're interested in. We're returning quite a few fields here, depending on what's in the data set, so let's keep only the fields relevant for me as a security analyst doing my investigation. And voilà, we have our results. Really, really easy to get information that was previously inaccessible.
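A hedged reconstruction of the query built up over the last few steps. The index pattern, field names, and dissect pattern are illustrative guesses based on the narration, not the exact demo query:

```esql
FROM logs-endpoint.events.network-*
| WHERE event.action == "lookup_result" AND event.outcome == "success"
      AND dns.question.name == "cdnverify.net"
// Extract the resolved IP that is buried inside the unparsed message text
| DISSECT message "%{} resolved to %{dns.resolved.ip}"
| WHERE dns.resolved.ip IS NOT NULL
| LIMIT 1000
| KEEP @timestamp, host.name, dns.question.name, dns.resolved.ip
```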

And like I was saying, we didn't even have to wait; it was all readily available. Now, of course, we can do more with it. We know we had 167 attempts for cdnverify.net that resolved to this IP address. Why don't we now start using some of the charting capabilities of ES|QL to give me a better picture? Seeing 167 hits like this doesn't help me too much as a security analyst, but when we chart it out, it gets even better.

So I have a similar query here with the same filters to start, but I'm going to create some new fields using our date functions to do some bucketing, and then I'm also going to do some aggregations. We're going to create a new field called `dns_count`, which counts all the records, and we're going to split them by the field we just created (`date`, which buckets into one-day timestamps), the host name, and the DNS question name, do some limiting, and then keep the fields we're interested in. And what this gives us now is this chart.
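Again as a hedged sketch, using the same illustrative field names as above, the aggregation step might look like:

```esql
FROM logs-endpoint.events.network-*
| WHERE event.action == "lookup_result" AND event.outcome == "success"
      AND dns.question.name == "cdnverify.net"
| EVAL date = DATE_TRUNC(1 day, @timestamp)          // one-day buckets
| STATS dns_count = COUNT(*) BY date, host.name, dns.question.name
| LIMIT 100
| KEEP dns_count, date, host.name, dns.question.name
```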

And you'll notice that as part of my query, I didn't have to declare a chart. We automatically suggest charts depending on the output of the data returned. It's saying: for the type of data you're returning, here is the chart that makes the most sense.

And what's interesting now: just by looking at this chart, I can see we have these DNS events at more or less 48 requests per day, and because they're so uniform, from a security analyst's perspective that indicates persistence. So what we now know is: you've had this threat making requests to this DNS name very persistently, 48 per day. That's two per hour, very typical of C2 communication. Really powerful stuff.

And again, we didn't have to leave our search area. Prior to this, you'd have to go into the Visualize tab, create a visualization, and so on. What's even nicer is we've given you editing capability right within here as well: if you want to change any of this, the color scheme, whatever, you can, and you can also save it to a dashboard right from here, or create a new dashboard right from here.

So, as Nina was saying, the six or so different areas you had to pivot between before are now one zone, thanks to the power of ES|QL. But we're still only scratching the surface here; these were pretty simplistic queries, to be honest. Let's dive a bit deeper.

I have another query here, and this one is pretty significant: it's 18 lines. I put this together specifically to highlight how much you can now do within the query bar that would typically have taken seven, eight, or even nine steps across different areas of the Elastic Stack.

Here's what this query is looking for. I had a use case as a security analyst: I want to find out if there are real users (human beings, not service accounts or any other non-human account) attempting to transfer data outside of my network, greater than a gigabyte in size, from multiple different hosts, on a given day. A pretty standard use case for security. Prior to this, it was not easy to do; you could do it, but only in really convoluted ways.

So here's what the query is doing; I'll walk you through it line by line, and hopefully it gives you an idea of some of the functions we've added to make your lives easier.

Again, some basic filtering. You can see here we're using CIDR notation to filter out internal traffic; we're only interested in data leaving our organization. And we're also looking specifically for traffic that has an assigned autonomous system name. This is something we add to our data, so we're just making sure that's present.

We're doing some date math again, which is basically bucketing: we want it per day, so we're going to truncate the timestamp to a day, and then format it into a human-readable value rather than something that's difficult for a human to read.

And then this is one of my favorite parts, where we're doing three aggregations at once, in one line, really simply. If you've ever used query DSL or anything else to do this, you know this would have been a really long blob of JSON; now it's one line. We're doing an aggregation to count all the traffic, a unique count of the hosts (because, remember, we want to see this traffic coming out of more than one host), and a sum of the source bytes, because we're interested in whether it was a gigabyte or less. And then we're going to split it by the user, the process that made the request, the day on which it happened, and that autonomous system name.

The next line is also one of my favorites, where we use our ENRICH command. ENRICH allows us to do what you'd typically do in an ingest pipeline with an enrich processor, but now as part of the query. If you're familiar with other languages, some of them call this a lookup; we named it enrich to align it with what we do on the ingest side, but now you can do it inline.

So what we're going to do with this ENRICH command is grab that user and check it against a list of users we have, to see what group they belong to. Remember, we're interested in real users here, so we're going to filter out anything that is not a real user, and we're going to use the ENRICH command to do it.

You can see that here. After I do that, we're going to say: OK, let's make sure we have a user name and a group name, and let's grab that total bytes value and transform it from bytes to gigabytes. Very simple math, plus some rounding.

Then let's make sure we're not showing results that are zero gigabytes. And then we're using a special form of the EVAL command to create a CASE statement. Basically, we're doing a conditional check: if the total gigabytes are greater than one and the unique host count is greater than or equal to two, create a new field called `follow_up` and assign it a value of true if it matches, or false if it doesn't. As a security analyst, this gives me an immediate field that tells me whether I need to follow up on this incident or not.

Finally, we filter specifically for those follow-up events that are true, and we exclude users within the group of system users. We limit to the top 10 results, sort by total gigabytes descending, and keep the fields of interest. And the result is this: there was this one person, James P, I don't know who that is, who, it seems, exfiltrated over four gigabytes of data via Chrome, from two unique hosts, sending it to Google.
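Putting the walkthrough together, a hedged reconstruction of the 18-line query might look like the following. The enrich policy name, index pattern, field names, and group name are all illustrative assumptions based on the narration:

```esql
FROM logs-*
// Keep only traffic leaving the organization, with a known autonomous system name
| WHERE NOT CIDR_MATCH(destination.ip, "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")
      AND destination.as.organization.name IS NOT NULL
// Bucket per day and make the bucket human readable
| EVAL date = DATE_TRUNC(1 day, @timestamp)
| EVAL date = DATE_FORMAT("yyyy-MM-dd", date)
// Three aggregations at once, split four ways
| STATS total_events = COUNT(*),
        unique_hosts = COUNT_DISTINCT(host.name),
        total_bytes  = SUM(source.bytes)
      BY user.name, process.name, date, destination.as.organization.name
// Look up the user's group via an enrich policy (hypothetical policy name)
| ENRICH user_groups ON user.name WITH group.name
| WHERE user.name IS NOT NULL AND group.name IS NOT NULL
| EVAL total_gb = ROUND(total_bytes / 1073741824.0, 2)   // bytes to GB, rounded
| WHERE total_gb > 0
// Conditional flag: more than 1 GB from 2 or more hosts means follow up
| EVAL follow_up = CASE(total_gb > 1 AND unique_hosts >= 2, "true", "false")
| WHERE follow_up == "true" AND group.name != "system_users"
| SORT total_gb DESC
| LIMIT 10
| KEEP date, user.name, process.name, destination.as.organization.name,
       total_gb, unique_hosts, follow_up
```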

It was funny: when I was simulating this behavior, I just launched Google Drive and started chucking random files into it. Hopefully that was OK with our security team. We'll see.

And you can see it's for this specific day, right? We're bucketing it per day. And again, I just want to highlight: this 18-line ES|QL query would have taken a lot of effort to do outside of ES|QL.

Now, of course, a search is great; what's even better is if I can just get alerted on it. So within the Security solution, we've added the ability to create detection rules based on ES|QL. This is pretty much the same example I was showing, in an alert format: you can see the same, or a very similar, query definition. I just dropped it down to 500 megabytes instead of a gigabyte. You can now create these detections just as you would create anything else.

This is really powerful for a security user, because we've given them full control over the output of the query. The query types we had before weren't that flexible in terms of what you could output; you now have full control over that.

And just to show you: if I go into the edit rule settings here, you can see there's a new rule type for ES|QL, and you can use pretty much the same query we saw before in the timeline. So those are some of our security use cases.

Of course, it's not just available within Security. I started with that because, again, I'm a security guy and I like talking about security as much as I can. But let's apply it to something like Discover.

This is a pretty interesting use case. Here's what I wanted to calculate, let's say from an Observability perspective. Day to day, we're bringing in a bunch of different log types, right? And it's very normal, as we bring in different logs, to have various ingest pipelines and logs coming in from different locations.

So sometimes there's a bit of latency between when the actual event was created and when it was ingested into Elasticsearch. One of our goals should be to reduce that ingest latency as much as possible, and the way to start is by identifying which data sources have particularly high ingest latency.

So I'm doing quite a bit of work with some date math here, converting timestamps into a double format to do millisecond extractions and things like that. You can see this AUTO_BUCKET function, which is really cool: you can basically say, take this timestamp, I want 24 buckets between this date and that date. You'll also notice functions like the NOW function, which is really handy, so you can do things like `NOW() - 1 day`. Very human-readable, very easy.
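A hedged sketch of that kind of ingest-latency query. The field names (using `event.ingested` as the ingest time), the bucket boundaries, and the grouping are illustrative assumptions:

```esql
FROM logs-*
| WHERE @timestamp > NOW() - 1 day
// Latency in milliseconds: ingest time minus event creation time
| EVAL latency_ms = TO_DOUBLE(event.ingested) - TO_DOUBLE(@timestamp)
| EVAL bucket = AUTO_BUCKET(@timestamp, 24, "2023-11-27T00:00:00Z", "2023-11-28T00:00:00Z")
| STATS avg_latency_s = ROUND(AVG(latency_ms) / 1000, 0) BY data_stream.dataset, bucket
| SORT avg_latency_s DESC
```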

And this is the result we get. We have several different data sources here, but our AWS VPC flow logs are taking the longest to ingest, for whatever reason, at an average of 197 seconds, which is not good; we should probably reduce that. Everything else is pretty low: 26 seconds, 19, 11, 8.

So this query, just by quickly working with some date math using the different timestamps available within the documents themselves, allowed me to identify straight away that our VPC flow logs take the longest to ingest. That's something we can act on going forward.

Now, of course, as Nina mentioned earlier, right from Discover, if we want to alert on this, we can: we can create a threshold rule, it populates the query straight from the query bar, and we can create an alert really easily. And you can see we then have all the other action types, depending on what you want to do to follow up. Really powerful stuff.

However, that's not all we can do. ES|QL itself, as Nina was explaining, has its own new API endpoint. Because we created a new query engine, we didn't put it under the same `_search` endpoint; we created a brand-new one.

And this is what it looks like. First, I'll show you different ways you can run it right within Dev Tools. I'll show two different variants: you can see we're posting to the `_query` endpoint, and, really simply, I can just put in the ES|QL query. Imagine doing this with query DSL; you know how many lines of JSON that would be.
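As a sketch, the Dev Tools call looks roughly like this (the query body itself is illustrative):

```
POST /_query
{
  "query": """
    FROM logs-*
    | STATS count = COUNT(*) BY data_stream.dataset
    | SORT count DESC
  """
}
```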

So if you're calling this from your applications, or from wherever you're calling us programmatically, you can really reduce the size of the request. I'll just run this here, and I'll also use this opportunity to show some things we've done to help with diagnosis.

This is an expensive query, and one of the reasons I'm showing it is to show what we've done from a Task Manager perspective. You can see here that it's now really easy to identify long-running queries: we put the query right in there, you can see how long it's been taking, and you can cancel it if you want. Traditionally, with queries, this was not easy.

Yes, everything made it into the Task Manager, but identifying which query was causing delayed tasks was typically not the easiest thing to do, so I wanted to make sure we made this easier with ES|QL. Also, from the Elasticsearch logging perspective, we actually log every single query invocation and tell you how long it took.

So we've tried to add as many options as possible for diagnosing any issues you're having with ES|QL. And you can see here the query finished. By default, the JSON response returns columns and values; however, if you want something different, we also support things like text format, CSV, and a bunch of others.

I'll use this opportunity to also show that, if you want to, you can combine ES|QL with query DSL filters. That's what I'm doing here: instead of doing the date-range filtering as part of the ES|QL query, I'm doing it the traditional query DSL way. Some people will want that, and that's totally fine. I'm also using the text output for this one as an example.
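A hedged sketch of that combination: a query DSL `filter` alongside the ES|QL query, with text output requested via the `format` parameter (the query and range values are illustrative):

```
POST /_query?format=txt
{
  "query": "FROM logs-* | STATS count = COUNT(*) BY data_stream.dataset",
  "filter": {
    "range": {
      "@timestamp": { "gte": "now-1d" }
    }
  }
}
```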

Just to highlight some of the things we've done: we know many of you will use this programmatically rather than through Kibana, and we really want to make sure you're aware that if you have existing applications querying Elasticsearch with some long, convoluted query DSL, you can now start thinking about converting some of those to ES|QL queries.

And here we go: this is the text format of those same results, easier to digest. However you're using them programmatically, you'll have that option available. And there's more: within our docs, we list all the different output types you can get from the `_query` endpoint.

One more thing I wanted to show you: as Nina mentioned, we've made our assistants aware of ES|QL. If you haven't been following, Elastic is doing a lot of work in the generative AI space, and one of the things we did was create AI Assistants that integrate with large language models to help with anything from dealing with security threats to query generation.

Now, of course, there was no large language model that was aware of ES|QL: we were still developing it, and there was no public knowledge about it, so they couldn't have trained on anything. So we had to create this whole concept of a knowledge base, to be able to ask questions about ES|QL and get results back.

Here I have a long-running thread with our assistant about crafting ES|QL queries. This is really helpful for a brand-new language, rather than sifting through all the documentation. Though we've made that easy too; actually, something I should probably show: if I just go back to Discover, we also provide an inline guide for ES|QL, which is very helpful.

But still, sometimes I just have a natural-language use case I want to query for, and I don't want to mess around with all the different commands. That's where the assistant really comes in, and this is true for both our Security assistant and our Observability assistant; we've done the work for both.

Within my long-running thread, I've used different examples to show how powerful it is. The first example, hopefully showing on the screen, is where I literally ask the assistant: I want to detect brute-force attempts against Azure AD, give me an ES|QL query to do that. And it spits out a perfectly valid, semantically correct ES|QL query, and it explains it to me. How cool is that?
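The assistant's actual output isn't reproduced in this transcript, but a query in that spirit might look something like this hedged sketch (index, field names, and the threshold are illustrative):

```esql
FROM logs-azure.signinlogs-*
| WHERE event.outcome == "failure"
| STATS failed_attempts = COUNT(*) BY user.name, source.ip
| WHERE failed_attempts > 10        // illustrative brute-force threshold
| SORT failed_attempts DESC
```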

And you can see it can be really vague, but it can also be really descriptive. You can say: hey, I have this particular index, I want to use these fields to do X, Y, and Z, give me the query to do it. And it will do it.

One of the features I really like about using the assistant for this: sometimes you create queries, but you don't want to go through the hassle of commenting them for your colleagues. So I just use the assistant to do that. I basically give it a query and say: can you go line by line and add a comment about what each line is doing? And there we go.

So here's an example of that: a perfect complement to a user crafting these queries. Now I have a fully commented query that I didn't have to spend the time commenting myself; I just used the assistant to do it. That's what I wanted to demo for you today.

If you want to see more detailed demos, stop by our booth, booth 880. And as Nina said, we also have a public demo instance you can use. I'm going to hand back over to Nina now to close out the presentation, and then at the end we'll also have a Q&A.

Thanks, James. So, to recap, we have a couple of key benefits based on the ES|QL engine, the ES|QL language, and the unified experience as well.

First of all, what we're trying to accomplish is really shortening the time to insight. I hope that through James's demo you could see that not only was he able to do a lot of things in just one query, but he was able to do it in alerts, in Discover, and in Security as well.

And he did it with great query speed. He mentioned it briefly, but ES|QL retrieves data much faster because it has a more powerful engine. The third point is that we are really trying to reduce the friction of bringing data into Elasticsearch, so ES|QL can also be utilized as part of getting data ingested.

What we are also trying to accomplish is post-ingest data processing: when the data has gotten into Elasticsearch and you're querying it, we'd like to enable our analysts to simply do ad hoc data exploration without needing to go back and reindex the data. That is super powerful, because teams are not always working directly together, and now they can go through the process of finding whatever they need to investigate. Improved alerting: we showed that as well.

By enabling aggregations in the alert rule types, it's super powerful and lets users do much, much more than they can today.

And then, of course, there is the new, transformative search engine that we are enabling. So, yeah, this is what we have for you today in terms of ES|QL.

Are there any questions? We'll do a Q&A now, for me and James.

余额充值