Navigating the growth frontiers of generative AI

Good morning, everyone. I hope you're having a great day, and I hope you can hear me. I can hear you all. OK, great, thank you. I lead the practice at Impetus; Impetus works closely with AWS. Today I want to talk about navigating the growth frontiers of generative AI.

As you know, a lot is happening with generative AI, and what we want to bring to the front today is how to navigate it step by step, because there are so many options for enterprises to choose from. I want to start with a quote from Swami Sivasubramanian of AWS, because the main point is that your data is your differentiator, especially for enterprises. Models are widely available, and so are processes and steps. What really differentiates you is your data: the more value you create with your data by building on top of it, the better you position yourself to add value for your end customers.

So what I want to talk about is the implications for generative AI. If your strategy is not right, and your data, infrastructure and other pieces are not harmonized, you will miss the market opportunities that exist. That's a big challenge, because right now generative AI is really catching fire and there is so much that enterprises want to do. To do that, you need to be able to create use cases, and if you cannot get the use cases right, they won't be successful.

To do that, you also have to be able to scale them, and as part of the scaling process we have to make sure all these steps are done in a step-by-step fashion. That is what I'll be talking about today: turning the whole journey into a repeatable process.

What we are proposing is a five-step approach that takes enterprises from a low level of maturity and readiness for generative AI to a level where they can build and safely, securely scale generative AI applications. This includes creating a strategy for all of your generative AI applications: what do you need to do to bring your data and infrastructure along, and how do you plan for all of that? That's the first step. Then you move to the generative AI foundation, add the responsible AI pieces on top, and finally contextualize and enrich.

I'll talk about each of these briefly. Each of these pillars brings certain value to the whole process, and you can interchange a few of them depending on where you are; some leaders are probably further ahead, but a lot of enterprises are still in the early stages of adoption. In what follows I'll cover each of these steps in more detail.

The first is generative AI strategy and architecture. Without this architecture in place, you end up with a fragmented process where you cannot scale applications further down the line. Today, many enterprises are at a stage where they have a lot of data warehouses and a lot of components in silos; bringing all of those pieces together and creating a platform is what gets you to the next stage.

We have a way of building a maturity model and doing an assessment, which also helps you prioritize use cases and build guardrails on top of that. What are all the pieces that need to go into creating your strategy and a high-performance architecture? It should enable you to select different LLMs, because you have options between proprietary models and open-source versions, and you need to choose which ones to go with.

In this space, we are doing work creating a strategic plan for a large banking institution. That is part of the steps to enable them to adopt generative AI, but to get there they need to plan out all of these steps, and this roadmap and assessment is what helps build all of that.

The next step, to enable you to create the use cases, is to first bring all the data together. There are multiple data sources in the enterprise, and you can start small; you can do useful things even on small data sets. But if you don't do this at enterprise scale, scalability becomes a problem later.

To help you get there, what we can do is unify the entire data estate for generative AI. As part of this, you need to look at the data and its quality, enhance it at every step, and integrate the data sources. We have accelerators to help do this, with data quality enhancement, schema optimization and various other solutions.

We can also help enterprises take their existing data sources and facilitate overall generative AI use case deployment. Deployment also requires you to take into consideration the many applications that exist on their current platforms: how do you integrate all of those, and bring in all the existing data sources along with the business context of the applications that use them?

As part of that, we have a reference architecture and an accelerator for creating a data platform. It is an accelerator you can use with one click to create all the structures required. We are using it with a large airline to deploy the use cases they are building on Bedrock, the 50-plus use cases we are building for this airline.

The last step I want to cover here is enterprise readiness. How do you create the context? This requires you to integrate existing data and build capabilities to easily find the data you're looking for.

Here we are creating search with retrieval-augmented generation (RAG), which we can drive with AWS, integrating with LangChain and other frameworks. With that, we can create a sequential set of applications to enable our customers in this space.
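The RAG flow described above can be sketched in a few lines. This is a toy illustration, not a production pipeline: the retriever scores documents by word overlap as a stand-in for an embedding model plus vector store, the document text is made up, and the final call to an LLM is left out.

```python
# Toy retrieval-augmented generation (RAG) sketch.
# Real systems replace `score` with embedding similarity over a vector store.

def score(query: str, doc: str) -> float:
    """Fraction of query words that also appear in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents by overlap score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a prompt that grounds the model's answer in retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Payments are processed on the first business day of the month.",
    "Claims must be submitted within 30 days of the incident.",
]
print(build_prompt("When are payments processed?", docs))
```

The resulting prompt would then be sent to whichever LLM you have selected; the grounding context is what keeps the answer tied to enterprise data.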

We are doing work with a large financial services organization in payments, enabling them to do semantic search using transformer models. As part of this whole journey, the next step is governance, which is a key thing.

There is a lot you can do, but unless it is done right, unless you are addressing bias, privacy and security, it becomes a challenge. We have capabilities to anonymize data; we also generate synthetic data with generative adversarial networks and LLMs, and build reliable solutions that can assure the right results.
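As a minimal illustration of the synthetic data idea, the sketch below generates fake rows by sampling each column independently from the real values. This is far simpler than the GAN- or LLM-based generation mentioned above: it preserves per-column distributions while deliberately breaking row-level linkage (a crude privacy property). The records are invented for the example.

```python
# Naive synthetic-data sketch: sample each column independently.
# Stands in for GAN/LLM-based generators; preserves marginals only.
import random

def synthesize(rows: list[dict], n: int, seed: int = 0) -> list[dict]:
    """Produce n synthetic rows whose values are drawn from real columns."""
    rng = random.Random(seed)
    cols = {k: [r[k] for r in rows] for k in rows[0]}
    return [{k: rng.choice(v) for k, v in cols.items()} for _ in range(n)]

real = [
    {"age": 34, "region": "east", "claim": 1200},
    {"age": 51, "region": "west", "claim": 300},
    {"age": 29, "region": "east", "claim": 950},
]
fake = synthesize(real, n=5)
```

Because columns are sampled independently, no synthetic row is guaranteed to match a real individual, which is the basic intuition behind using synthetic data for privacy; real generators additionally learn cross-column correlations.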

Because these are generative processes, we want to be sure they give you the right responses each time you ask. The next step is ethical AI, removing bias in the data; there are processes to do all of this. And then come the regulatory readiness frameworks.

Here we add the ability for enterprises to deal with compliance issues, address regulatory requirements, and look at transparency and accountability. Each of these steps contributes to responsible generative AI.

These are things we are working on with many customers: in a regulated environment, what needs to be done, and what pieces of the puzzle need to come together in one place? There are various things under each of these which we can talk about.

That covers those pillars. The fourth pillar I'll talk about is contextualizing and enriching generative AI. To do this, we have the capability to work with prompt engineering libraries, because today LLMs give you the right results only if you ask the right question the right way; unless you can repeat those prompts and optimize them, it becomes a challenge.
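The "repeatable prompts" point can be made concrete with a tiny prompt-template registry: prompts are stored once, versioned, and filled in per request instead of being retyped. The template name, version and parameters below are all illustrative, not from any particular library.

```python
# Minimal prompt-template library sketch: store prompts once, fill per call.
# Versioning lets you optimize a prompt without breaking existing callers.

TEMPLATES = {
    ("summarize", "v2"): (
        "Summarize the following {doc_type} in {n_bullets} bullet points, "
        "for an audience of {audience}:\n\n{text}"
    ),
}

def render(name: str, version: str, **params: str) -> str:
    """Fill a stored template; raises KeyError if a parameter is missing."""
    return TEMPLATES[(name, version)].format(**params)

prompt = render("summarize", "v2", doc_type="claims report",
                n_bullets="3", audience="underwriters",
                text="...report text...")
```

Keeping templates in one registry is also what makes prompt evaluation possible: you can A/B test "v2" against "v3" across the same inputs.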

Prompt engineering, evaluation and optimization is something we do in that area. I talked about synthetic data generation, and vector search is another important factor, because you want to be able to create the vector databases, set them up, and search through all of that quickly.
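At its core, vector search is nearest-neighbor lookup by cosine similarity over embeddings. The sketch below uses a plain in-memory dictionary and hand-made 3-dimensional vectors as stand-ins; a real deployment would use an embedding model and a vector database.

```python
# In-memory vector search sketch: cosine similarity over stored embeddings.
# The 3-d vectors are hand-made stand-ins for real embedding output.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def nearest(query_vec: list[float], index: dict[str, list[float]], k: int = 1):
    """Return the k document ids most similar to the query vector."""
    return sorted(index, key=lambda d: cosine(query_vec, index[d]), reverse=True)[:k]

index = {
    "refund-policy": [0.9, 0.1, 0.0],
    "baggage-rules": [0.1, 0.8, 0.2],
}
print(nearest([0.85, 0.15, 0.0], index))  # → ['refund-policy']
```

Vector databases add the pieces this sketch omits: approximate-nearest-neighbor indexes so search stays fast at millions of vectors, plus persistence and filtering.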

Setting up all the infrastructure for that is important, and enriching the data is a priority for many applications. Here we bring in the concept of knowledge graphs: building a knowledge graph from your data in the right way and adding it on top of the existing solution.

Solutions can then query the graph and get the information. The next key piece of the puzzle is tailoring all of this to industry requirements. Here you can build LLMs for specific applications in your target industry and create the capability to drive industry-specific responses.
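The "query the graph" step can be illustrated with the simplest possible knowledge graph: a list of (subject, predicate, object) triples and a pattern-matching query, standing in for a real graph store queried with SPARQL or Cypher. The policy facts are invented.

```python
# Tiny knowledge-graph sketch: triples plus wildcard pattern matching.
# A real system would use a graph/triple store; the data is made up.

TRIPLES = [
    ("PolicyA", "covers", "flood damage"),
    ("PolicyA", "excludes", "wear and tear"),
    ("PolicyB", "covers", "theft"),
]

def query(s=None, p=None, o=None):
    """Return triples matching the (s, p, o) pattern; None is a wildcard."""
    return [t for t in TRIPLES
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(query(p="covers"))          # everything any policy covers
print(query(s="PolicyA"))         # everything known about PolicyA
```

Feeding query results like these into the prompt context is one way a knowledge graph enriches an LLM's answers with structured enterprise facts.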

Here we are working with a large financial services organization, helping them identify the right responses for talking to customers, primarily so that sales agents can give them the right information.

Unless you can identify the information the customer is looking for, the conversation drags on. You have to be able to find the right information at the right time, and that is what we are doing for this organization.

Under the scaling aspect, we are working on deploying and monitoring. Deployment is very important here. We work with enterprises to help them use all the applications; you can build use cases and deploy them, but the challenge is how you industrialize the development.

Unless you can quickly build and deploy the applications and industrialize them, it becomes a challenge. This is what we are doing with many of our customers.

We are also involved in measurement and performance tracking, which is very important for LLMs today. You can scale these applications, but unless the LLM's performance matches what you need, you run into issues with hallucinations and many other things, so you have to keep testing continuously.

As part of this, we are working with the large airline to build applications on Amazon Bedrock, the 50-plus use cases that we are taking to production with this airline. This is something that's very important here.
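For reference, invoking a text model on Amazon Bedrock from Python is a short boto3 call. This is a hedged sketch: the model id is illustrative, your AWS account needs model access granted, and the request body shown follows the Anthropic Messages schema used on Bedrock, so it changes if you pick a different model family.

```python
# Hedged sketch of an Amazon Bedrock invocation via boto3.
# Model id is illustrative; the body follows the Anthropic Messages schema.
import json

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # illustrative choice

def build_body(prompt: str, max_tokens: int = 256) -> str:
    """Serialize a Messages-format request body for a Claude model on Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def invoke(prompt: str) -> str:
    """Call Bedrock (requires AWS credentials, region, and model access)."""
    import boto3  # imported here so the sketch loads without AWS installed
    client = boto3.client("bedrock-runtime")
    resp = client.invoke_model(modelId=MODEL_ID, body=build_body(prompt))
    return json.loads(resp["body"].read())["content"][0]["text"]
```

Industrializing deployment then means wrapping calls like this with retries, logging, prompt versioning, and the performance tracking discussed above.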

The next step is the various use cases. There are chatbots and conversational agents, which we are building with many customers; this is a key part of what people are using LLMs for.

We are also working with customers on personalization and content generation for sales and marketing. Content generation is very important here, and you have to be able to personalize, and even hyper-personalize, the content for these applications. This is something we are working on with one of our key strategic partners.

Code generation and documentation is an area where we can use CodeWhisperer and various other solutions to generate code, and also generate documentation for the code, so that it becomes easier.

The next area is product discovery and description generation: being able to look at all the product offerings in your organization, create content for all of them, and generate it as users request the information.

Enterprise search is another important piece, because it sits in front of every other application an enterprise may use: you want to be able to search through information effectively. To do this, the RAG and LLM capabilities I talked about earlier are very important, as is being able to scale them.

Then you can search through all your enterprise data. We are also working with many enterprises on research and report generation, because it is important to be able to pull all the information together, ingest it, and make it available to the LLM for generating the right reports.

We are doing this for insurance, banking, financial services, healthcare and so forth. Claims submission, management and reporting is another area we are working on, because for insurance companies it is very important to be able to look at claims reports, figure out how to reduce leakage through false claims, and track that.

There are a lot of manual processes involved in looking through all of these, and being able to automate them with generative AI is very powerful for these organizations. Translation and content localization is another important aspect we are working on, and we have delivered solutions for customers in this space, because some customers have locations in different geographies.

Being able to tailor content for those locales is important. There are many, many other applications and use cases I'm not covering given the time, but I'd be happy to talk more if you come by our booth.

There are also various principles for getting started with generative AI, because the most important aspect of getting generative AI right is getting your data estate in the right spot. If you have a data lake today, that's a good first step; if not, you have to bring all your data warehouses into one place where the data can be integrated and made available to the LLM.

But you have to address the privacy and security concerns we talked about, and start small, because this is obviously a big journey. It's almost like the dot-com era: a lot is happening right now, some efforts will succeed and some may not. How do you make sure you can scale these applications?

Create that culture within your organization to understand the capabilities you can unlock and the innovation that can be created. This is what we can help customers with, along with validating and testing all these models, and setting up the due diligence, guardrails and all the pieces that go into creating a powerful generative AI application.

We are seeing many examples of this. The main thing is that there will be failures along the way, but it's necessary to experiment, learn from each experiment, and create the next steps.

One thing that really caught my attention in the keynote session this morning was Swami Sivasubramanian's point about the data foundation being enriched by the AI models themselves, because it is the fuel that drives the whole cycle. This is a flywheel that drives the process, but if you don't get the data right, the flywheel will not run well.

That's the key part, and to help you navigate this journey with Impetus, we have various pieces we can bring. The first step is a readiness assessment, which we can help you do; it puts a maturity model in place and creates a roadmap to take you from where you are today to the future state. To help you get there, we have a lab structure where we can work with you to quickly build a solution architecture in the design lab.

Then a short build lab can show you the value of that solution; an example could be enterprise search or something similar, to create the proof of value for your organization. This brings in other pieces around our generative AI services, where we bring in the data unification piece to drive all the generative AI applications, plus LLM selection.

There are lots of LLMs, and now we are also moving into multimodal models, vision models and so forth, so it becomes a challenge. You have to be able to use all of this knowledge and drive it through LLMOps to scale.

Finally, we enable you to build these applications at scale. We have proven frameworks and accelerators to help you drive this process. We also have a booth you can come by to talk to us: it's booth number 981.

To summarize, I first want to thank all of you, and to highlight the five pillars I talked about up front: creating a roadmap for your whole structure; then the data strategy, unifying all your data and bringing everything into one place; then creating a responsible AI layer on top of that; then enriching and contextualizing your AI; and finally scaling all of this through the overall processes.

I'll be happy to take any questions. Let me stop here; any questions, I'm happy to answer.
