The secret path to practical generative AI in the enterprise

Thank you everyone for coming. My name is Matt McLarty. I'm the CTO of Boomi. You may know me from co-hosting the API Experience podcast, available on all your podcatchers. Or you might know me from my incredibly unsuccessful music career. But that's not why I'm here today.

I am here to talk to you about the secret path to practical generative AI in the enterprise.

So I know it's all about AI, everything we're hearing in the keynote. There's been an unbelievable explosion of innovation in the technology world throughout this entire calendar year, right? It's all anyone wants to talk about. I would even say, I don't think we could get more hype around AI than we have right now.

Look at these headlines, right? It's going to be the next productivity frontier, you're going to get so much value out of AI. Or it's going to replace everybody's job. Two extremes. It's going to save the world; no, it's going to destroy the world. The possibilities are endless.

So I think we all recognize, those of us who have been in the industry for a while, that we've been through waves of technology trends. I don't think I've seen a bigger one than this, and I remember when the internet was born, or at least became popular. But the reality is we really don't know what we're doing yet, and that's OK.

We don't have a clue what we're doing yet. We don't know what we're going to be doing. We know from experience that everything's going to change; we just don't know how. So as much as we're all hyped, at maximum excitement about AI, and maybe maximum fear and trepidation, we're not really implementing it yet.

So it's really exciting to be here to see real developments, tangible tooling, new platforms that are going to enable us to do stuff. But what are we going to do? We don't really know yet.

So I'm just here to tell you that's OK. If you're sitting in the room saying, we don't know how to get started, we don't know what to do yet, don't worry. We're all in this together, right?

But I think we can look at the world of generative AI and calm ourselves down a bit. If it's going to destroy the world, it'll take a while. We've got a few years left, at least.

In the meantime, we can look at gen AI, from a business perspective, as just another step along the evolution of what we maybe a year ago were calling digital transformation. But that's kind of a taboo term now, right? You can't talk about that anymore. Bad word.

But if we think about how digital technology has changed business: it's really not so much about the technology. When you think about it, it's more about removing friction between companies, their products and services, and their customers and partners.

If you think about how we add business value through digital technologies, it's through more engaging experiences, it's through better automation of processes, it's through new business models, new products and services. And gen AI, as exciting and revolutionary as it is, is from a business standpoint another step along that evolution.

So when you're evaluating how we're going to get value out of this, think about those hallmarks of digital business, because that's where it's going to help.

So I said before, we don't know what we're doing, we don't know what we're going to do. Well, we can guess that if we're going to take advantage of gen AI, we need to have it baked into the infrastructure, baked into our architecture.

And we can already see that some basic capabilities are going to be really important. Gen AI in a business context is going to require a lot of contextualization. If you want meaningful results for your business, you're going to have to provide context to these large language models, multimodal transformers, everything that's coming out.

If you want to make it work for your business, you're going to have to have this idea of session persistence: thinking about ways to have longer-range, useful interactions with the models so that you can keep that context alive.
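As a minimal sketch of what session persistence can mean in practice, here is a rolling message history that each model call sees. Everything here is illustrative; `call_model` stands in for a real LLM client, and the trimming policy is just one simple option:

```python
class Session:
    """Keeps conversation context alive across model calls."""

    def __init__(self, system_prompt, max_turns=20):
        self.messages = [{"role": "system", "content": system_prompt}]
        self.max_turns = max_turns

    def ask(self, user_text, call_model):
        self.messages.append({"role": "user", "content": user_text})
        reply = call_model(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        # Trim the oldest user/assistant pair (keeping the system prompt)
        # so the context stays bounded.
        while len(self.messages) > 1 + 2 * self.max_turns:
            del self.messages[1:3]
        return reply

def echo_model(messages):
    # Stub model: reports how many messages of context it can "see".
    return f"seen {len(messages)} messages"

session = Session("You are a helpful enterprise assistant.")
first = session.ask("What systems hold our order data?", echo_model)
second = session.ask("And which of those feed the warehouse?", echo_model)
```

The second call sees the first exchange as well, which is the whole point: each request carries the accumulated context forward.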

On the other hand, in order to interact with the models and get the value out of them, you're going to need to orchestrate those models. You're going to need to be able to chain calls to the models. And maybe most importantly, the thing people probably aren't talking about the most: if you want gen AI to add value for your business, you have to be able to take the insights, the inferences, the outputs of the models, and plug them into the places where your business model works for your company: points of customer engagement, points of employee optimization, all the different friction points in your business landscape.

If you want to reduce that friction, you need to integrate the insights from the models into those points.

So if I step down from the huge hype train and get into the guts of an enterprise, I think it's fair to say that generative AI really comes out of this incredible explosion of the business intelligence and big data analytics world. Where the models come from is the humongous amounts of data we're able to process.

And so I'm going to take sort of a perspective from the data engineering landscape of pipelines and think about this.

So we've kind of grown up in this world where you've got systems of record and systems of engagement, right? You may have seen this from Gartner, things like bimodal IT, where organizations want to be very changeable and fluid in how they interact with their customers, partners, and employees, but they want to maintain the integrity of their core systems.

But we've had systems of intelligence in the enterprise for a while, where we've been able to do all this analytics work, and those worlds have kind of evolved separately. AI and machine learning were already bringing those worlds together; with gen AI, the gravity is even stronger.

So really this is about: how do we get the data and information from the systems of record in the enterprise into the systems of intelligence that are going to produce these new insights and new model inferences? And then how do we disperse that information into the systems of engagement? This is how you're going to get business value out of these things.

And so we've been looking at this. Where Boomi plays, we're very much in the integration fabric of our customers' infrastructures. You're going to need, not necessarily a data pipeline that feeds all of your data into something like a data warehouse, but a context pipeline, so that the right data, the right information, is fed into the systems of intelligence in order to interact effectively with these models.

Because the reality is, maybe a few organizations are going to build their own large language models or multimodal transformers or these gen AI models, but it takes an unbelievable amount of horsepower and data to do that. Pragmatism, at least for a while, says we're going to need to use the foundational models that are out there, and working with foundational models is all about driving context.

And if you think about it, those foundational capabilities I talked about before, data contextualization and those long-running sessions of interaction, plus active governance, something I didn't talk about, which is fundamental when we're working with these models: how can you enforce responsible AI? That has to happen in the context pipeline.

People may already be thinking about this side: how do we get the contextual information out of our systems in order to drive more useful insights from these models? But the other side of it, as I said, may be even more important.

So you need an action pipeline. I'm not sure if everyone has followed this notion of building agents with AI, but this is kind of the thing now. We used to build apps; well, in the gen AI world, apparently we're going to build agents: specialized, gen AI-enabled software components that will allow you to do a related set of tasks.

You can even think of something like a gen AI travel agent that could be pulling down flight information, booking hotel accommodations, and so on. So we're going to start building these gen AI agents, and they need actions to do meaningful things.

And so when you've got the insights, when you've got the actual contextualized findings of the models, you need to be able to inject those out into your systems of engagement. And that's where you can do model orchestration. Because again, we may all be hyped on gen AI, but the reality is, in the enterprise, we haven't even gotten to the point where something like machine learning is fully deployed, right?

A lot of enterprises are still in the early stages of how they utilize machine learning. Machine learning is still very useful; gen AI does not get rid of the need for it. Machine learning is extremely contextualized and very specific, and things like keyword search and index searches are all going to stay useful.

So if you're in an enterprise context and you're looking for a business outcome, a business result, it's not entirely going to come from gen AI. We've run our own experiments where doing something reductive, like coming up with a configuration out of a gen AI model, is actually less predictable than using good old-fashioned machine learning.

So because we're going to need to utilize not only multiple gen AI models, but also machine learning models and other software components, you have to do that orchestration; you have to integrate the insights into your endpoints.
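A toy sketch of that kind of mixed orchestration: route the deterministic, structured requests to a classic lookup component and fall back to a generative model for open-ended ones. The routing rule, the lookup table, and both backends are illustrative stand-ins, not a real implementation:

```python
def keyword_config_lookup(task):
    # Stand-in for a classic ML / indexed-search component that handles
    # well-defined, reductive questions predictably.
    table = {"timeout": "connector.timeout=30s", "retries": "connector.retries=3"}
    for key, value in table.items():
        if key in task:
            return value
    return None

def llm_fallback(task):
    # Stand-in for a generative model call for open-ended requests.
    return f"[generated answer for: {task}]"

def orchestrate(task):
    # Prefer the deterministic component; fall back to the generative one.
    hit = keyword_config_lookup(task)
    return hit if hit is not None else llm_fallback(task)
```

The design point is the one from the talk: the gen AI model is one component among several, behind an orchestration layer, not a replacement for everything else.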

And again, you need active governance on that side too, because when you're calling into all these models, you have to be very careful about what data you're sharing with the foundational models, whether you're deploying them yourselves or utilizing third-party services.

So I spent a lot of time on this slide for a reason. This is the evolving landscape that we're seeing, and I can't stress enough how important it is: when we don't know where we're going, what's the best thing we can do? Keep our options open.

If you're able to build these foundational pipelines in your organization, they will keep your options open as the path becomes clear. And note that a lot of the heavy lifting you're going to be doing in the gen AI space is not necessarily with the models themselves. As I said, those are foundational; those are going to be developed externally to your organization.

The heavy lifting is: how do you get to your data? How do you get to your endpoints? How do you connect those things? And how do you do it in a way that will allow you to easily plug and play with models?

I presented something similar a couple of weeks ago at an event, before OpenAI had the whole organizational meltdown that happened over that one weekend. And I said, you never know, OpenAI could be the Napster of gen AI, right? They could pave the way and then disappear. We never know.

So you have to be flexible, you have to be ready to plug and play those models. So if you take this pipeline approach, that's possible.

Now, the great news is, here we are at re:Invent, and this approach is compatible. You're going to have all sorts of services from AWS if you're in the AWS ecosystem, from good old S3 through Kinesis and Aurora. Your endpoints, your actual systems of engagement, may be deployed in EC2 or EKS, or could be Lambda functions. You could be exposing APIs to the outside world to allow your services to be consumed by third parties. Those services all have to be connected in. And yes, there's great tooling that's been introduced around Bedrock and Titan and so on.

But you have other data sources too. Most organizations are in a hybrid mode, with data and applications all over the place. So this approach is still very much compatible, even with a consolidated infrastructure.

So if you think about what goes into building these pipelines from a capability standpoint: in order to get the right context, you need data governance, you need the ability to integrate data, events, and event streams, and you need automation to provide the orchestration and workflow around developing the pipeline.

Similarly, you need that automation to create the actions: to take the models, orchestrate the model calls, and inject the results into your applications. You need the integration to get to those endpoints. You may need API management, both to consume the APIs where the models are hosted and to expose your own. Because here's another good thing about gen AI from a consumer perspective: as fantastic as the technology is, it's an API call, right?

It might require some nuance around prompt configuration, but it's an API call at the end of the day. And you may also be exposing your endpoints out to third parties. So we at Boomi, this is what we do: we provide these foundational capabilities.
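To make the "it's an API call" point concrete, here is a sketch of building a chat-style request body. The field names mirror common chat-completion APIs but are illustrative here; check your provider's API reference for the exact schema:

```python
import json

def build_request(model, system_prompt, user_text, temperature=0.2):
    # Assemble a chat-completion-style request body. The schema here is a
    # common shape, not any one vendor's exact contract.
    body = {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
    }
    return json.dumps(body)

payload = build_request("example-model", "Be concise.", "Summarize Q3 revenue drivers.")
```

The prompt nuance lives in the `system` and `user` message content; the transport is just an HTTP POST with a JSON body like this one.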

And we've been around a while; we kind of created the iPaaS category. We cover all these bases: we can help govern the data, we can help integrate the data and events, we can help automate the whole flow. We have strength around application integration and API management, and even the ability to have UI-based interaction when you need human-attended interactions with the model.

So we have a platform that's foundational for both of those pipelines.

So if we think about where we're going: what do we know now? What can we guess about where we're going? Well, there are already some patterns emerging.

I talked about the agent model. I might put that in the category of model chaining. If you've heard of LangChain or LlamaIndex, there are tools emerging around how you do this type of model orchestration.

Dynamic automation agents kind of play there too. Think about what used to be a very rigidly configured workflow that says: if this, then that, then that. Now we can start to use models as reasoning engines, so that if we empower them with a number of actions they can perform and give them some semantic tasks, they can determine what the best workflow would be.
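The idea above can be sketched as a loop over model-chosen actions. Here `plan_steps` is a stub for the model-as-reasoning-engine, and every function and field name is illustrative; a real agent would ask the LLM to choose and order steps from the action registry:

```python
def fetch_order(ctx):
    # Stand-in action: look up the order in a system of record.
    ctx["order"] = {"id": ctx["order_id"], "status": "delayed"}
    return ctx

def notify_customer(ctx):
    # Stand-in action: push a message out to a system of engagement.
    ctx["message"] = f"notified customer about order {ctx['order']['id']}"
    return ctx

ACTIONS = {"fetch_order": fetch_order, "notify_customer": notify_customer}

def plan_steps(task):
    # Stub "reasoning engine": a real agent would have the model pick and
    # order steps from ACTIONS based on the task description.
    return ["fetch_order", "notify_customer"]

def plan_and_run(task, ctx):
    # Execute whatever plan the (stubbed) reasoning engine produced.
    for step in plan_steps(task):
        ctx = ACTIONS[step](ctx)
    return ctx
```

The contrast with rigid if-this-then-that automation is that the step list is produced at run time from the task, not hard-coded into the workflow.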

That's an emerging model. And RAG. How many people here have heard of RAG? Retrieval? OK, cool. RAG is making the rounds. You can tell it was named in a technical paper, right? This was not a marketing term: Retrieval-Augmented Generation.

This is really how you contextualize your interactions with these models. At Boomi, I lead a group we call the Boomi Innovation Group, and the team has been doing some incredible experiments around gen AI from the outset, for the full year. We went from idea to implementation very quickly on RAG. I remember I was in a Slack chat with an old group of colleagues, and somebody mentioned, oh, there's this Retrieval-Augmented Generation thing.

It was like a wildfire; within a week, everybody on every tech site was talking about RAG. So this is really about how you take the information you have and work effectively with a large language model, utilizing the strength of this powerfully trained model with the right contextual information that you have.

Everybody's going to need to do this. This is probably ground zero for useful, relevant outputs from the large language models of gen AI. So we actually came up with something that we call BRAG: Boomi Retrieval-Augmented Generation. I'm here to brag about it, right?

And the idea is, we were looking at things like LangChain and other tools and going, you know what, that feels familiar: taking inputs that require contextual information from inside an organization, running a workflow that distills and transforms that data, preparing it to send off to a large language model, and then using a vector database to store the embeddings and get the right contextual information.

So Chris Capetta, one of the members of my team, went to town on this and created a whole Boomi implementation of RAG, and he did it following this idea of the context pipeline and the action pipeline.

So what he did, and I know we're at re:Invent and I'm here showing this OpenAI stuff, so bear with me: at the time, he took a templated process that would pull in the contextualized information, prepare the data, call out to OpenAI to generate the vector embeddings you need in order to store the semantic information, and then store them in this Pinecone vector database.

So this is all about generating the context for the interaction, and this was all done no-code using the Boomi platform. That's building the context pipeline.
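The ingest side of that pipeline, stripped down to its essentials, is: chunk the source text, embed each chunk, store it for later retrieval. In this sketch the word-set "embedding" is a toy stand-in for a real embedding API such as OpenAI's, and the dict stands in for a vector database such as Pinecone; all names are illustrative:

```python
def embed(text):
    # Toy "embedding": a set of normalized words, not a real vector.
    return {w.strip(".,?!").lower() for w in text.split()}

def chunk(text, size=6):
    # Split the source into fixed-size word windows.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

STORE = {}  # chunk id -> (embedding, original text), standing in for a vector DB

def ingest(doc_id, text):
    for i, piece in enumerate(chunk(text)):
        STORE[f"{doc_id}-{i}"] = (embed(piece), piece)

ingest("handbook", "Expense reports are due by the fifth business day of each month.")
```

Swap in a real embedding call and a real vector store and this is the same shape of process the talk describes.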

On the other side, he created an action pipeline that takes in more information, uses the same OpenAI call to generate the embeddings, retrieves the information from Pinecone, and then calls back out to OpenAI, adding the contextualized information to the prompt so that the result coming back from OpenAI takes all of that context into consideration.
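The retrieval step of that action pipeline can be sketched the same way: "embed" the question as the chunks were embedded at ingest time, pull the best-matching chunk, and splice it into the prompt before the model call. The word-overlap score is a toy stand-in for cosine similarity over real embeddings, and the model is a stub:

```python
def embed(text):
    # Same toy "embedding" used at ingest time: a set of normalized words.
    return {w.strip(".,?!").lower() for w in text.split()}

CHUNKS = [
    "Expense reports are due by the fifth business day.",
    "Travel must be booked through the approved portal.",
]
STORE = [(embed(c), c) for c in CHUNKS]  # stands in for the vector DB

def retrieve(question, k=1):
    # Rank stored chunks by overlap with the question (toy similarity).
    q = embed(question)
    ranked = sorted(STORE, key=lambda pair: len(q & pair[0]), reverse=True)
    return [text for _, text in ranked[:k]]

def answer(question, call_model):
    # Augment the prompt with retrieved context before the model call.
    context = "\n".join(retrieve(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return call_model(prompt)

# Stub model that just echoes back the context line it was handed.
reply = answer("When are expense reports due?", lambda p: p.splitlines()[1])
```

The key point is that the model's answer is grounded in whatever the retrieval step put into the prompt, which is exactly what RAG is for.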

So the good news here is, with the releases coming around the AWS gen AI stack and with the incredible flexibility of the Boomi platform, he was able to unplug OpenAI and plug in Claude on Bedrock, unplug Pinecone and plug in Kendra for the embeddings, and basically rerun the whole BRAG workflow on the full AWS stack.
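That plug-and-play property is easier to get if the pipeline depends only on narrow interfaces rather than on any one vendor's SDK. Here is a sketch of the shape, with both "providers" as stubs; real adapters would wrap the OpenAI, Bedrock, Pinecone, or Kendra clients behind the same two callables:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Providers:
    # The pipeline only ever sees these two operations.
    embed: Callable[[str], List[float]]
    complete: Callable[[str], str]

def run_pipeline(question, providers):
    _ = providers.embed(question)  # would drive vector retrieval in a real run
    return providers.complete(f"Answer: {question}")

# Two stub provider sets; swapping them changes nothing in run_pipeline.
openai_like = Providers(embed=lambda t: [float(len(t))],
                        complete=lambda p: f"[modelA] {p}")
bedrock_like = Providers(embed=lambda t: [float(len(t))],
                         complete=lambda p: f"[modelB] {p}")

a = run_pipeline("status of order 7?", openai_like)
b = run_pipeline("status of order 7?", bedrock_like)
```

Keeping the seam there is what lets you "unplug OpenAI, plug in Bedrock" without rewriting the workflow itself.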

So again, this is foundational technology. I want you to know that you don't have to throw out everything you have today in order to get started in this space; tools like our platform can be extremely useful in this new context.

It's going to take companies like us understanding the new landscape and giving you practical guidance like this. But you're already on your way. This secret path I talk about is to say: the work you've been doing on digital transformation, the work you've been doing on developing composable architectures and integrating your data and systems, is all reusable in this new gen AI space, right?

And everything I showed you today: we have articles on our community that show you how to take exactly those recipes I just gave you for generative AI and implement them on the platform, both with the OpenAI and Pinecone example and with the AWS gen AI ecosystem example.

So again, maybe gen AI is going to destroy the world. Maybe gen AI is going to save the world. It's going to take some time. In the meantime, let's go generate some business value with this new technology, and let's ride the wave. All right, thank you very much.
