Avoiding 5 missteps that undermine your AI readiness and success (sponsored by Qlik)

Brendan Grady: All right, good afternoon, everyone. How's everyone doing? That's good, I got a little applause. Congratulations on making it all the way over to Mandalay Bay. For those of you that had to take the Uber helicopter, it must have been fun. Nice view.

So, my name is Brendan Grady. I'm the general manager of Qlik's Analytics business unit. I have full responsibility for all of our AI and analytics products and initiatives. Today I'm thrilled to be joined by Dr. Jay Ganesh.

Jay Ganesh: Good afternoon, everyone, and thanks for making it to Mandalay Bay. Once again, my name is Jay Ganesh. I'm based out of Bangalore, India. I'm the chief product officer for Harman. I work in the areas of AI, quantum computing, and generative AI. Good to be here and looking forward to the conversation.

Brendan: So we're going to talk about avoiding some missteps as you start thinking about AI. That's the official title. The unofficial title is: we're going to help you keep your organization's name out of the news. We don't want you to end up there, and we don't want you to make any of these mistakes. So we're going to focus in on that.

I think it's safe to say that we're living in an uncertain world. The only thing that is certain for any of us is a heck of a lot of uncertainty out there. There's zero doubt about that; we've been talking about it at Qlik for several years. And everything on the horizon, whether it's wars or a looming recession, is getting a little scary. At the same time, the last year for all of us in the analytics space has been pretty exhilarating and pretty exciting because of this generative AI thing. I think it's here to stay. It seems like it's taken off, sort of like the interweb: it came out and it's here to stay. But the interesting thing for all of us in the analytics and data space is that it's actually caused uncertainty for us. What does it mean for us as vendors who provide this? It's disrupting us.

And as we think about it, I'm going to date myself here a little bit. I like to go back to the mid-nineties. In the mid-nineties, there was a company that made camera film, and they had a debate about what to do. They heard about this new digital photography technology, and they said, we have an idea: we'll put gold on the film. We're at that type of moment. Are we going to continue to look at the world the way we have and use traditional analytics techniques, or are we going to embrace the new thing in these uncertain times?

Organizations are turning to AI. Generative AI exploded onto the marketplace. Can I get a show of hands? Who here has kicked the tires on ChatGPT personally? Who here refers to it as your junior copywriter internally? OK, good. We do as well. Everyone's sort of kicking the tires on it. And so as I think about this: what does it mean for us? What is it going to do to your worlds? As you think about bringing AI into organizations, there are a couple of things that can go wrong. There are ways this has gone very well for organizations, and there are ways it has gone very badly. And here's the reality: when it goes badly, you end up in the news. You just do. Look, I'm going to recommend that everybody, before you go down the AI path or even a business intelligence path, just go to Google and search for AI or artificial intelligence failures, or business intelligence failures, and get ready to scroll, because it's not just one page. There are dozens of pages and dozens of stories.

Some of these we've heard about in the news; some of the examples I have up here right now. Many of these came about because the data set wasn't correct, and the system just spewed bad answers at people. Others were bias and hallucination. These stories are very public, so just go take a look. When I was talking to Jay before, and talking to a few people: I have a good friend who runs operations for a software company who said, Brendan, I've got to go do AI. What do you mean, do AI? He said, I don't know, my CEO said I need AI on stuff. So I sent him this presentation. And so we'll get into it right now.

So what's the first thing that can go wrong? In the year 2023, I cannot believe we're still having this conversation: not actually trying to solve a business problem, just throwing technology at something because it's the latest and greatest and coolest thing that everybody wants to do. The question you need to ask yourself as you start down this path: are you trying to change the way you make decisions, or are you just doing a science project because it's really fun and really cool and you want to kick the tires? It's interesting to me that we're here in 2023 and we're still running into some of these challenges. There are a couple of ways we want you to think about it, and we'll get into some tactical sides. But first and foremost, artificial intelligence and business intelligence are about decision making, and making decisions in an intelligent manner.

So as you engage, whether you're a business user or a technical user, ask yourself the question: what decision are you trying to help with here? What decision are you actually trying to make? The other thing is, don't just do it because it's fun. Believe me, I'm a nerd. I love playing with technology. But we have an organization to run, so really think about what the business impact is, and then define success. Make sure it's measurable. Measure it in terms of saving money, making money, or, especially (we'll get into this one later), managing risk.

Now, Jay, I know that you have had dozens of initiatives around this. How did you think about this first misstep? I know you haven't made any of these mistakes yet, knock on wood, and I don't think you will.

Jay: I think when you're looking at risk-return analysis, you need to look at the investment from a holistic perspective, because AI can go wrong in multiple places. So first you start with an effort and impact analysis. You typically start with shortlisting all the use cases: you source use cases from across the company and do a detailed, in-depth analysis of the returns you get and the effort you need to put in. You put them on a quadrant and try to figure out which one you should go after. Once you have shortlisted the use cases, you get into the tricky part of understanding what data these use cases need to run properly.

That is the first critical step of this entire process, where you see that there can be tricky data sets needed to make these decisions and make these models work. For example, there could be data sets with PII data hidden inside them. If you're doing unstructured data processing, and a model is looking at unstructured data, you need to ensure that there is no PII data creeping in. Don't use PII data; don't use the personal information of the user. That's the next step, where you go around identifying the variables which can potentially impact the outcomes.
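A minimal sketch of the kind of PII screen Jay describes for unstructured data might look like the following. The patterns here are illustrative assumptions, not Harman's actual mechanism; a production pipeline would add a trained NER-based detector, since regexes alone miss names, addresses, and context.

```python
import re

# Illustrative patterns only -- real pipelines layer NER on top of rules.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace matched PII spans with a [REDACTED:<type>] token."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

print(scrub_pii("Contact jane.doe@example.com or 555-867-5309, SSN 123-45-6789."))
```

Running a gate like this over documents before they ever reach model training is the cheap first line of defense; anything it flags gets quarantined for manual review.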

The next step is that you start looking in depth at the data itself, and you look at whether there is a bias in the data somewhere, because when you collect data and try to model it, there can be hidden biases in it. That's when you do a detailed data analysis.

Some of the examples you spoke about, where AI went wrong, happened purely because the model was trained on a certain type of data set that was not a reflection of the entirety of the world. So that's the first step: look at the data, look at what the data distributions look like. If there is an extremely skewed data set, figure out how you can improve the data: either you get actual real-world data, or you create synthetic data which is a representation of the real world. Then you come to the next part. Once the data is cleaned up, you start building the model and you look at the model's performance, and that's when you look at the model bias.
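The distribution check Jay describes can be sketched as a quick class-balance audit. The labels below are hypothetical; the point is that a single imbalance ratio is enough to flag a training set that doesn't reflect the world it will be used in.

```python
from collections import Counter

def label_skew(labels):
    """Return per-class share and a simple imbalance ratio (majority/minority).

    A high ratio is a cue to rebalance: collect more real-world data for the
    minority classes or generate representative synthetic samples.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    shares = {cls: n / total for cls, n in counts.items()}
    ratio = max(counts.values()) / min(counts.values())
    return shares, ratio

# Hypothetical loan-decision labels, heavily skewed toward "approved".
shares, ratio = label_skew(["approved"] * 90 + ["denied"] * 10)
print(shares, ratio)  # a ratio of 9.0 flags a strongly imbalanced set
```

What counts as "too skewed" depends on the use case, but checking the ratio before training is far cheaper than discovering the bias after the model is public.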

I'm listing the entire process because errors and problems of the type where AI fails can creep in across each one of these stages.

Brendan: So business impact isn't only that you're going to gain money; it's also protecting yourself. For the company that you're in, having a news story about where you might have exposed PII data would be an absolute nightmare. The business impact can be losing money; it can be a stock price going down.

Brendan: Absolutely. So interesting. I love the fact that it's almost like you're leading me to this next slide about data. Data is really, really important. And the thing about data, especially in the generative AI space, is ensuring that you think about these recommendations and what Jay was talking about. But I want you to really ask yourself four questions about your data.

First and foremost, do you have all the data that you need? Do you have access to everything you need to do the analysis?

All right. The second question I would caution you on: is it accurate, or could it mislead you? Could it lead to biases?

The third: is it misleading? Could it be a hallucination? I'll give you a great example. I got my job back in May, and I decided to call on my junior copywriter, if you know what I mean, to write my bio. And it's amazing. We were talking on the bus: I got the most amazing degree ever. I got a PhD in applied mathematics, according to my junior copywriter. Completely wrong, but it hallucinated. It didn't know who I was, and I couldn't figure out where the data came from.

So really understanding that is absolutely critical. And then the final piece: is it being misrepresented? Make sure that you have a way to control that. And you're going to need a lot of data. The AI space is going to require you to bring in more data than I think any of us have ever possibly imagined.

And the thing about readiness is that you really want to focus on the variety of data that you have. Are you bringing in the different types of data that you need? Is it governed, or is it the wild west? Are you bringing in a bunch of public data where you have no clue where it came from? Can you actually get to it, and can you ensure that whatever insight informs the decision you want to make is consumable?

And when we say consumable insights, people think dashboards. It doesn't mean a dashboard. A dashboard is just one way of consuming an insight. Consuming an insight can be part of a business process, where it's surfacing an answer and giving you a recommendation. And then the final piece: please, please, please don't just do analysis for the sake of it. Connect what you're doing to making your business decisions; connect it into your core processes.

We'll get into a little bit about humans here, but you need to bring some of these things back into your core processes and systems. Otherwise you're just doing analysis for the sake of analysis.

Did I pretty much hit what you were talking about? Did I miss anything?

Jay: One additional thing I want to add here: when you're looking at the data problem, it is not just a problem of data within the enterprise. Many times, for a business model or a decision model to work properly, you might want to bring in data from outside the enterprise.

That data typically is not under your control, which means that the pitfalls Brendan spoke about, in terms of the skewness of the data, the quality of the data, and all the variety you need for your models, could be completely different from the data sets you have within the enterprise.

So you need to be doubly sure, particularly when you're bringing in external data sets where you don't have control over the quality, to put additional control mechanisms in place to make them work for you.
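One way to sketch those "additional control mechanisms" for external data is a simple quarantine gate at ingestion time. The field names and ranges below are made up for illustration; the idea is that records you don't control get schema and plausibility checks before they can touch a model.

```python
def validate_external_batch(rows, required_fields, allowed_ranges):
    """Quarantine external records that fail basic schema/range checks.

    External feeds aren't under your control, so gate them before they
    reach the model: reject rows with missing fields or implausible values.
    """
    accepted, quarantined = [], []
    for row in rows:
        ok = all(f in row and row[f] is not None for f in required_fields)
        if ok:
            for field, (lo, hi) in allowed_ranges.items():
                value = row.get(field)
                if value is not None and not (lo <= value <= hi):
                    ok = False
                    break
        (accepted if ok else quarantined).append(row)
    return accepted, quarantined

# Hypothetical external sensor feed: one clean row, one missing a field,
# one with a physically implausible reading.
rows = [
    {"device": "a1", "temp_c": 21.5},
    {"device": "a2"},
    {"device": "a3", "temp_c": 900.0},
]
good, bad = validate_external_batch(rows, ["device", "temp_c"], {"temp_c": (-50, 60)})
print(len(good), len(bad))  # 1 accepted, 2 quarantined
```

Quarantined rows go to a human for review rather than being silently dropped, which keeps an audit trail of what the external provider actually sent.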

Brendan: That's fascinating. And again, you're leading me to the next area we need to think about: risks. For those of you that have kicked the tires on some of this generative AI with some of the open models, who here has found some real wonkiness in it? Some real weirdness? Everybody found some weirdness. So think about bringing this into the enterprise and what you need to do.

There are inherent risks when you're making decisions. And the other thing is data privacy. The world is changing around data privacy. Anybody from Europe here? Yeah, data privacy in Europe is definitely a topic, and you're going to have to think about this. It's a massive challenge. You saw recently that the Biden administration here in the US and the EU are releasing new guidelines and laws; this is really going to impact how we need to think about it.

The other thing you really need to think about is the lawsuits that are happening. Everyone has seen the lawsuits about copyright infringement, and somebody is going to lose those. So you've got to consider these risks. And Jay, in the industry you're in, it would be very easy to use a public model and probably pick up some proprietary information that has been leaked. How are you thinking about risk and security and privacy?

Jay: Risk, security, and privacy are extremely critical when you're building any of these models. Let me address them one by one. I spoke about the privacy of the data and the data variables you're looking at. The first mandate we have for any of our development operations is to exclude any PII data.

That ensures no information about the individual goes in. And we are extremely careful about even using demographic data, which can potentially be reverse-engineered to get back to the original data set.

So there are strict controls in place on what part of the data you can use in the models. PII is a strict no-no, and even if you are using demographic data, we try to obfuscate it as much as possible so that no re-engineering is possible. The security of the data is extremely critical, and this is where we have multiple encryption mechanisms in place, because a lot of our processing happens on the cloud. We also ensure that there are controls, like multi-factor authentication, at every single step. And finally, governance: every single step in the entire model life cycle is monitored, which means we have a solid MLOps pipeline in place that can track when a model was committed, by which developer, at what point in time. That ensures that as the model moves through its life cycle, as more and more improvements happen, we can trace back if there is an audit. Many, many companies have audit mechanisms for models.

So you can actually trace back, because when you're building machine learning models, you typically iterate hundreds of times. Sometimes there's a team of 10 people working together, and you probably have a thousand versions of the same model. You iterate, and you choose the best one.

Each one of these iterations is captured, at any point in time, in a system that we can trace back to say which model was chosen and what the rationale was for choosing it. That shows there is a solid governance mechanism in place.
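A toy version of the traceability Jay describes, recording who committed which model iteration, when, and why one version was chosen, might look like this. A real MLOps stack would use a model registry (MLflow, for example) rather than a hand-rolled class; this only illustrates the idea.

```python
import hashlib
import json
import time

class ModelAuditLog:
    """Append-only record of model iterations: who committed what, when,
    and on what basis a given version was eventually chosen."""

    def __init__(self):
        self.entries = []

    def commit(self, developer, params, metrics, rationale=""):
        """Record one model iteration and return its version number."""
        blob = json.dumps({"params": params, "metrics": metrics}, sort_keys=True)
        entry = {
            "version": len(self.entries) + 1,
            "developer": developer,
            "timestamp": time.time(),
            "artifact_hash": hashlib.sha256(blob.encode()).hexdigest(),
            "params": params,
            "metrics": metrics,
            "rationale": rationale,
        }
        self.entries.append(entry)
        return entry["version"]

    def best(self, metric):
        """Which version was chosen, and the record explaining why."""
        return max(self.entries, key=lambda e: e["metrics"][metric])

log = ModelAuditLog()
log.commit("dev_a", {"depth": 4}, {"auc": 0.81})
log.commit("dev_b", {"depth": 6}, {"auc": 0.86}, rationale="deeper trees helped")
print(log.best("auc")["version"])  # version 2 wins on AUC
```

The hash ties each log entry to a specific artifact, so an auditor can later confirm that the model in production is the one the log says was chosen.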

Brendan: And at Qlik, we really talk about making sure that you have a trusted data platform, so that you have the data that you need. For all of you in the room: if I asked whether you had too much data, I can imagine what the answer would be. We all know the answer to that question. You all have too much data, and by the way, it's growing every day. We know that.

But that's actually good news for large language models. If you're going down this path, large language models like a lot of data; standard ML models like a lot of data as well. So really think about bringing all of that data in, with all its variety. The second thing, as Jay pointed out, is making sure this data is regularly getting updated with new information.

I recently got a call from my cable provider, who was working with data on me that's about 15 years old. It was very fascinating to hear them trying to sell me something in a state where I no longer live. So make sure your data is always up to date. And then there's the risk around governance. Jay just talked about PII data; we talk about data residency. You've got to make sure that you have some governance around this, or you are exposing yourselves.

But the really interesting thing for me right now is that generative AI is like that new bike you get on Christmas day. It's the shiny new toy. Everyone loves it, everyone wants to ride it. But the reality is, you can't leave Woody behind, for all of you that are Toy Story fans.

Generative AI is complementary to traditional AI techniques, and you can't forget that. We all love the shiny new toy, we get it, and the promise of what we are actually seeing is fascinating. But if any of you follow Gartner's hype cycle, generative AI is at the tippy, tippy top of the hype cycle right now.

So really think about that. What happens after the peak of inflated expectations? The trough of disillusionment. That is coming; stuff is going to happen here. So as you go down this path, think about how these two can be complementary. And if you come by our booth, we actually have a really good example of traditional AI techniques combined with generative AI, using one of the announcements made here around Bedrock.

So it's a really interesting thing we're doing to bring both of these techniques together. Jay, have you abandoned your traditional AI techniques?

Jay: Absolutely not, because most of our models right now are traditional models, be it classification models, image analytics, or computer vision models. They're all running on traditional AI.

So it's a mix of both. And as we progress further, you'll see more and more of these models merging and working together, with various types of handoffs. Because if you look at enterprise data, it is complex: you have structured data and unstructured data in the same data set.

If you look at a NoSQL database, you pretty much find all types of data in it. So this is where the two are going to complement each other, and enterprises need to choose which models to use where. It's not as if we are abandoning traditional AI entirely and shifting lock, stock, and barrel towards generative AI. They are going to coexist and complement each other.

Brendan: Yeah. And using it for generating automated insights, where you're actually saying, hey, you should probably go look at this. Traditional AI techniques cover things like predictive modeling. I loved my stats class in college. I knew I'd use it eventually, and here I am all these years later, knowing predictive analytics is important.

Those are traditional questions, like: what are my sales going to be this quarter? Answering questions like that is great for traditional models. Alerting, letting people know what's going to happen, is a pretty interesting thing. I love getting a text on my phone saying, hey, this is likely to happen, and telling me what to go do.

And then finally, natural language processing. I have two digital natives at home, a 17-year-old and a 21-year-old. They do not want to go to a dashboard. They're never going to a dashboard. They want to pick up their phone, use natural language to say, give me an answer about X, Y, or Z, and have it spit back at them in a TikTok video. That's literally what my digital natives want to do, and we are all going to experience that as more and more digital natives come into the marketplace.

And then finally, where generative AI is really strong is content generation and summarization, helping you tell that story. And as Jay pointed out, bringing structured and unstructured enterprise data together: that's nirvana, and we're close. The technology is almost there.

So as we think about this, going down the right path versus the wrong path is really interesting. We've all kicked the tires on these open models, and we've all seen some of the challenges there. Most of the companies we are talking to right now, including our friends at Harman, are really leaning towards private models and using proprietary data. There's always going to be a need to bring in some public information, especially in the industry you're in. How are you actually thinking about it? What model path are you starting to go down?

Jay: When we started looking at large language model adoption within the enterprise, one of the interesting things we found when talking to our people was that they want the same experience they get in ChatGPT, but on enterprise data.

Can I ask the same type of questions of my enterprise data? That's not possible out of the box, because ChatGPT is trained on a completely different data set, and enterprise data sets are completely different: they are proprietary, they are within the company, et cetera.

So we wanted to solve this problem for decision makers, and the way we are doing it is that we ended up building our own private large language model. We built something similar to what GPT is doing, but it does exactly the same, or even better, for enterprise data sets, because enterprise data sets are smaller.

Companies like OpenAI have built their great models on extremely large data sets, but enterprise data sets are not as big as the global data set. So we started with one of the open source foundation models, called Falcon, and we started fine-tuning and retraining Falcon on a proprietary data set.

We took health care as a scenario to demonstrate this is possible. Internally, we collected data from clinical trials, a humongous amount of data, pretty much all the clinical trials happening across the world that are published in the public domain: extremely unstructured data. And we used this data to build a private large language model. We call it HealthGPT.

That model is now available internally, and users can run completely unstructured queries, the same kind of queries you run against any of the public models, on their own data sets. So that is one way we are trying to bring the power of what people see in the public domain to the more restricted enterprise domain, especially the health care space.

Brendan: That is a massive challenge for you, and it's really good. So, we said there are five missteps, and it seems like we're done, right? We're on number five; we can probably go. But, sort of like Steve Jobs, there's one more thing we want you to think about. From all I can tell, we're all humans in the room, right? Any aliens? No aliens in the room.

Humans. Do not forget the people in this process. It is absolutely critical. AI is going to transform the way many of our organizations work. Many people think it's going to replace human intuition. I was talking to someone on my team the other day, and they said, wow, do you think I'm going to get replaced? And I said, mm, I think you're going to get replaced.

I don't think this is Skynet. It's not going to be self-aware; it's not going to be the rise of the machines, Terminator-style. But one thing I do know is that managers that embrace AI are going to replace managers that don't embrace AI. That's what's going to happen, and you need to think about that.

And again, I'll mention I'm a bit of a nerd, in case you didn't pick up on that. I am a huge Marvel Cinematic Universe fan. I look at AI like Iron Man. For anybody that knows Iron Man: it's this suit of armor built by a person named Tony Stark. He added this thing called Jarvis, which was the smarts, but the suit didn't work without Tony Stark inside it.

So you had to have Jarvis, the AI, combined with Tony Stark to really make it work. What fascinated me when we first met, probably a month ago now, was that you talked a lot about the humans in what you're doing. I'd love for you to share the way you're looking at those decisions with humans.

Jay: We call this entire process human in the loop.

The reason is that ultimately, the decisions made by the AI need to be implemented by a human being, or an automated decision needs to be taken. For that to happen, you need the human to be completely comfortable with what the AI is proposing. What we have done within the company is classify use cases for AI, and generative AI of course, into three categories: low-risk use cases, medium-risk use cases, and high-risk use cases.

For example, a low-risk use case could be a chatbot trained on user manuals or policy documents which can answer questions from an end user. That is a low-risk use case where you can run the model, the chatbot, or the decision engine on its own. You don't need a human to come in between and take a corrective action; those models are pretty much self-running.

You end up taking a sample from these decisions once in a while to ensure that the model is performing up to the mark. So it is low-touch, low human in the loop. Then there are the slightly tricky models, medium-risk or medium-complexity models, where the output of the model could be up for scrutiny: there can be potentially contentious decisions by the model.

For example, there could be models for credit risk predictions, or models where you are creating generative marketing content. These are scenarios where you cannot let the model run on its own, and this is where human intervention is needed. For these medium-risk use cases we recommend a fairly heavy human-in-the-loop approach. We may not look at every single decision, but we take a significant sample of the decisions made by the model, and you have a human validating those decisions.

Which means that once the decision is made, you have a human checking it, and only then is the decision implemented. That is something we recommend very strongly if you are somewhere in the middle. And finally, there are the high-risk models, the extremely high-risk models: for example, decisions related to medical outcomes, or legal document reviews, or contract reviews.

Those are decisions where we recommend a heavy human-in-the-loop approach, which means every single decision made by the model goes through a loop where a human validates it, and only then is the decision made. So you can take a graded approach: not all model outcomes need a human in the loop. You can put a risk score on the models. For low-risk models you take a sample; for medium-risk you have a fairly significant human in the loop; and for high-risk models you pretty much have the human intervening in every single decision.
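The graded approach Jay lays out can be sketched as a per-tier sampling policy. The rates below are invented for illustration: the talk prescribes "occasional sample," "significant sample," and "every decision," not exact percentages.

```python
import random

# Jay's three tiers mapped to review-sampling rates (illustrative values).
REVIEW_RATE = {"low": 0.02, "medium": 0.5, "high": 1.0}

def needs_human_review(risk_tier: str) -> bool:
    """Decide whether a model decision is routed to a human reviewer.

    High-risk outputs (medical, legal) are always reviewed; medium-risk
    gets a heavy sample; low-risk gets an occasional spot check.
    """
    return random.random() < REVIEW_RATE[risk_tier]

# High-risk decisions always hit a human in the loop,
# since random.random() is always strictly below 1.0.
assert all(needs_human_review("high") for _ in range(100))
```

The tier assignment itself stays a governance decision; the code only enforces whatever policy the risk classification produced.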

Brendan: So this goes back to some of the risk management we were talking about before. It's absolutely critical. As I was talking about my daughters earlier: we were all talking about our kids on the ride over. They're a different generation; they have different expectations. They all grew up with AI embedded in everything they do. Anybody have children out here? Kids, teenagers, by any chance? They all grew up with it.

So as we think about it, those of us that didn't grow up as digital natives, we question the data. We question the outputs; we question what's put in front of us. I think that's safe to say. But our teenagers, our 20-year-olds, they sort of accept this. So how do you see that generational shift impacting this human-in-the-loop thing?

Jay: One of the things I'm seeing with the younger generation, the next generation, is that they are quite comfortable with sharing a lot more data, because they are used to always being on social media, always surrounded by cameras capturing their personal information. And maybe they don't know the sensitivity of all this data, where it can be used or misused.

That's how they have grown up, whereas for us, questions get asked. I was at a conference last week in Bangalore, and the organizers wanted to take my picture even for the registration booth. Unfortunately, I was a speaker, so I had to go through with it. But questions can be asked as to why you need to take somebody's picture for an event like that.

Similarly, if you walk around this hotel, there are thousands of cameras, and each one is capturing something about you. So we need to look at where this data is being used. Is it being stored securely? Is it being archived, or is it being destroyed? These things are extremely critical for us to consider, and as part of the governance mechanisms and the risks we plan for, these are important decisions as well.

It's not just the model, but also training and retraining the model and taking care of the data which goes into making some of these decisions.

Brendan: It's interesting. We talked about data literacy for years, and data literacy is about understanding data. I actually see a bit of a shift towards AI literacy: understanding the impacts of what AI is and what it does is important, but what does it mean for society? I think we're starting to see some of these conversations crop up, and all of you are going to be involved in them at some point, whether you want to be or not.

So as we think about it, if you don't have an AI strategy, I want everybody to take a deep breath in and let it go out. It's OK. We surveyed a bunch of our customers and prospects to see whether or not they had a strategy, and not a lot do: under 40% have an AI strategy. I'm not surprised, right? It's sort of where things are. Some companies, like Harman, do; you have a great AI strategy. And again, think about all of your analytics alongside it.

I was recently in Sweden and I asked a room full of our customers a question: who here is interested in AI? The entire room raised their hands. I said, who here would take a meeting with us if we were going to come talk about AI? Everybody in the room raised their hands. And then I asked: great, who here is still just going to want a pretty pivot table to look at after they meet with us? Everybody in the room raised their hands.

So it was a really fascinating moment. Everybody knows they need to go down this path, but it's early days, and there are still traditional things that we do; don't forget about them. Don't forget about your data scientists either. They bring a ton of value, and those techniques can help you. We have a point of view around AI, and we call it Staige. Our point of view is that it's not just the technology; it's the way you think about it, and a lot of that is in the five missteps you heard today.

It's going to be about the processes and the ethical considerations that you're going to need to bring here. That's a huge piece of it. And guess what the last piece is: technology. All of those other things need to be considered first. And so as you walk out of here, what do I want you to walk away with? Ensure that you have a trusted data foundation.

Please, please spend time on your data. Think about where the data came from, track where it came from, ensure you have governance, and take Dr. Jay's advice on how you think about that data. The second thing: if you're using straight-up analytics or a BI tool, please think about augmented analytics. Don't make critical business decisions based on pretty pictures alone, because pretty pictures will mislead you. Everybody loves a pivot table with a bar chart. We all do; we've all used one. But look deeper, and bring AI on top of some of that data.

And then finally, expose this to more people. The more people who are empowered to use AI as part of their work, the more successful your organization is going to be. Qlik has been at this for many years; AI is not a new thing for us. Our engine was actually developed 30 years ago (this year is our 30-year anniversary), and the way it works was designed to mimic the human brain. It's a really fascinating thing.

So we've been at this, bringing out AI and ML capabilities for years, and you're a great customer of ours on that front. We have natural language capabilities, natural language generation capabilities, and machine learning for mere mortals with our AutoML offering. And then finally, just think about what you're trying to do and what decisions you want to make. Don't do AI because it's the cool thing; do AI because it's going to make your decisions smarter.

With that, I'd like to thank you for coming all the way from Bangalore. I really appreciate it. I'd like to thank the audience, too, for attending today. Hopefully this was beneficial. I promised we would land the plane a little early, and we did. We'll be around to answer any questions you have. Enjoy the rest of the conference, and hopefully you all make it back to the mothership down at the Venetian a little later. Thanks, everyone, for your time. Thank you, sir.
