Cognizant cognitive architecture for generative AI: The path to MVP

Thank you. Thank you for setting that up. Only in Vegas, right? Fifty people show up on a Wednesday morning to hear more about cognitive architecture. So thank you for coming in.

Over the next hour, a couple of my colleagues and I are going to walk you through some work that we've done that we think is relevant. It should be very appropriate to where we think most folks in industry are in terms of their own journey with generative AI.

So I'm going to kick it off. My name is Naveen Sharma. I run our data, analytics and AI business at Cognizant. I've been with the firm for about 15 years and been in consulting for a lot longer.

I'm going to hand it over to my friend Phil. Phil is the CTO for one of our Cognizant acquisitions, a firm called Inawisdom, which focuses purely on AWS and generative AI. It's a very narrow definition by design, because we think there's so much to do in the AWS stack that we needed that specialization.

Phil's going to talk a little bit about the architectural approaches we have for deploying this, and then we're going to make it even more real. We'll have Nick come up; Nick's going to talk to you about how Williams Lea has deployed generative AI within their environment and how they're taking it out to their clients and consumers. So that's the plan.

We're going to try and keep this fast-paced and moving. If there are questions, we'll try to take them at the end. But if we run out of time, please come find one of us outside or at the Cognizant booth and we'd be happy to engage and answer questions.

So the agenda is going to look like this. We'll start off by talking about what it is that we're doing in generative AI. What do we think the opportunities and challenges are? What are the cognitive architectures? I'll spend a little bit of time setting that up before Phil gets into how we do this, the method for deploying it, and then for the last piece, like I said, we're going to make it real: how has Williams Lea deployed this for their clients? Very, very quickly.

Does anyone know when ChatGPT was launched? The date was November 30th, 2022. Tomorrow would be the one-year anniversary. I have yet to see tech that has taken off at this speed.

If you were at the keynote this morning, you heard all of the focus, all of the energy, all the new developments that AWS has put into expanding their platforms: the capabilities with SageMaker and Bedrock and all of the other associated assets that are being enhanced.

We've done the same thing internally. Now, we're not a software company; we're a services company. For a services company, it's important that we don't show up to our clients and say, "So, what would you like us to do today?" It's always important that we show up with a perspective on where we think the generative AI space is going.

We made a very deliberate move to not define the strategy top-down. Instead, we went to our associates, all 350,000 of them. We opened this up to them and said, what do you folks think are ideas where generative AI can be used to make an impact in Cognizant's business, but also, more importantly, in our clients' businesses? They came up with about 35,000 ideas.

We narrowed it down to about 3,200 that were purely focused on generative AI, and I'll tell you that we went through every single one of them. We actually called out the top 20, built them out, and gave them to clients to get real-world feedback and information back from them.

So that's one of the things that we started to do, because we didn't want to show up uninformed. It was very grassroots-driven. Two, and the slide has actually gotten a little bit dated already, the second bullet on the right:

We realized clients want to come in and engage and experiment, and also learn from the work that's been done in this space. So we've actually launched a network of AI studios where clients come in, we show them the work that's been done, and we help them get started with the art of the possible. Then very quickly, in a day and a half, we help them build out that plan: the POCs, getting the data ready, and then taking it out and rolling it out to the enterprise to make it a little bit more real.

So that's the approach that we've taken, and you can see some of the numbers in terms of the projects that we're executing. The demand has been phenomenal. I think there's not a single industry where we don't have conversations happening on how to make gen AI real in the business.

The answer to that is this slide. There we go. Oh, wrong button. There we go.

So we've taken a three-phase approach to it. The first is that you cannot do good or meaningful generative AI unless your underlying data platforms are in good shape. You heard Swami talk about that this morning, right? The underlying data foundation, the quality of the data, the ability and the speed with which data can be read and written, and the information that you have about that data asset are critical.

So first, make sure that the data platforms are ready. If you've already done that, that's wonderful; we then move you on to the second stage. The slide says "create models," but we're not showing up and saying you should build your own LLMs. Look, there may be an edge case or two where you want to build your own LLM, and that's fine. For the most part, though, what we've seen is that clients are able to look at the variety of LLMs that are available, foundational models, third-party models, open source, and choose the right model and approach to meet their end results. Sometimes a vanilla model just works.

Other times you go in and fine-tune the model. One of the things we're doing in our own healthcare business is taking a model that's been trained on medical data and fine-tuning it to build out four use cases. So that's another approach that can be deployed. Other times it's just simple prompt engineering that gets you to that end state.

So depending on what it is you're trying to solve for and the complexity of the problem, the approach that we take changes. That's the second part of the motion: picking and choosing the right model architectures.

And then the third thing, which companies sometimes forget in the excitement, is that these things are wonderful science experiments, but business value is realized only when you deploy them within the business process. Sometimes it's important to start at both ends of this.

So don't spend too much energy preparing the platform, but don't zip through that either. At the same time, do not forget about the business value realization. It's very easy to do a science project where you show up in four weeks and show something that looks fancy, and the business scratches their head and goes, "Why would I ever want to do that?"

So keep that in mind as you run through your journey. You saw how we spoke about the several hundred projects we've delivered; some common themes stand out. We're putting the themes up on the slide, and I'm not going to read them all out to you, but this will make sense when Phil comes out. You can see all of the text on there in terms of capabilities and limitations.

We stepped back and asked: what are the common themes that we're seeing when clients are actually going out and deploying these use cases? So we've called out what we think are common cognitive architectures. It's not the easiest thing to roll off the tongue, but it makes a ton of sense when we show you what it looks like.

What are these patterns that you can start to build out within your enterprise, so that every time you deploy a use case you're not building it from scratch? If you've already done text summarization, then something that takes that to the next step and composes writing is a meaningful and easy iterative next step.

So we'll show you what those common patterns look like. The intent is to make sure you don't rebuild from zero every single time; you've got the base framework laid out and you're just executing on those patterns.

So keep that in mind. One other thing that we see with clients is the need to be smart about this. Look, like I said, the technology is less than a year old, 364 days in common usage, but the underlying tech has been out there for five or six years. That first white paper that described it, "Attention Is All You Need," came out in 2017.

So the tech has been around for a while, but it just didn't have the exposure. And one of the things that happened because the exposure was limited was that industry, folks like you and us, didn't get the time or the attention or the focus to actually evaluate the risks of it.

So this is something that we think is important. On the right side, we help with those cognitive architectures. We can help you be a little bit more transparent and explainable. You saw, again, Swami speak this morning about making sure there are no copyright violations, right?

There's traceability back to where things come from. When you're writing code, you make sure the code you're putting into your system isn't copyrighted by someone else, opening you up to risk.

So those are the kinds of things that we've done some work on, and we'll show you what that looks like. At the bottom left is where you see ethics, regulation, and economic and social impact. That's the ever-changing space.

About a month ago, the US released its executive order on the safe use of AI, and the UK has done some tremendous work leading up to that as well with their AI Safety Summit. But it's an ever-changing space, and this is one that I think companies are not spending enough energy on.

I can absolutely see some firm of lawyers somewhere gathering together to launch a service that keeps their clients posted on the latest and greatest regulation.

So these are all the things that you need to think about. But like I said, I'm now going to take it a level deeper. I'm going to call on my colleague Phil to come up, and he's going to talk about these cognitive architectures, these patterns that we have.

Phil.

Thank you, Naveen.

Good morning. Good afternoon, everybody. Yes, so I'm going to talk about some of the architectural patterns and use cases. We generally see gen AI helping across quite a broad set of industries, as Naveen said, but it distills into a few key areas of use.

One is automation of processes: taking in things like documents and processing them quicker and getting them available.

One is helping developers be more efficient when it comes to creating code, and you've seen a lot about CodeWhisperer in the last few days. Another is around customer experience, where we're particularly seeing a lot in things like email triage: processing and understanding emails coming into a business, working out where they need to be routed and which SME needs to deal with them.

And then lastly, it's enterprise knowledge: understanding all the data and where it may sit within your organization, and being able to ask questions of it. Worldwide, we're seeing about 150 client engagements in gen AI and up to 400 unique concepts, but it still broadly breaks down into these main areas.

So to deliver generative AI on Amazon, we generally look at two services. The first one is Bedrock, and you all heard about Bedrock today. That makes a number of foundational models available to you. We've generally used most of them; the one we tend to gravitate to is Claude, mostly, but we've used all the models. And then there's also the SageMaker option.

So when we look at these in real life, when we actually deploy them in production, we're looking at a few aspects of which model to deploy and how to deploy it. If you're going down the Bedrock route, Bedrock is a pay-per-token service, so you only pay for the prompts that you put through it, while SageMaker is a per-hour-of-GPU kind of thing.

Today they announced the Llama 2 70-billion-parameter model being available on Bedrock; if you were running that on SageMaker, it would cost you about $150,000 a year. So the price point is significant. The flip side is that SageMaker also has its Hugging Face integration, which gives you a much wider range of models to choose from.

So you can choose Flan, Falcon, and other open-source models through that connection, and you can dive deeper and do a lot more in terms of alignment and refining those models when it's inside SageMaker, but it comes at that extra cost.
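As a rough illustration of that trade-off, a back-of-the-envelope comparison might look like the sketch below. The hourly instance rate, token price, and monthly token volume are placeholder assumptions, not quoted AWS pricing.

```python
# Back-of-the-envelope cost comparison: SageMaker dedicated hosting vs Bedrock pay-per-token.
# All figures are illustrative assumptions, not actual AWS pricing.

HOURS_PER_YEAR = 24 * 365

# Assumption: a multi-GPU instance large enough for a 70B model at ~$17/hour, always on.
sagemaker_hourly_rate = 17.00
sagemaker_annual = sagemaker_hourly_rate * HOURS_PER_YEAR  # roughly $150k/year

# Assumption: a blended Bedrock price of ~$0.01 per 1,000 tokens (input + output)
# and a workload of 50 million tokens per month.
bedrock_price_per_1k_tokens = 0.01
monthly_tokens = 50_000_000
bedrock_annual = bedrock_price_per_1k_tokens * (monthly_tokens / 1_000) * 12

print(f"SageMaker (dedicated, always-on): ~${sagemaker_annual:,.0f}/year")
print(f"Bedrock (pay-per-token at this volume): ~${bedrock_annual:,.0f}/year")
```

The point of the sketch is simply that dedicated hosting is a fixed cost regardless of usage, while pay-per-token scales with volume, so the right answer depends on how hard you drive the model.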

So this is what our architecture looks like. What we did was bring together the best of breed of two elements of Cognizant to form this architecture, and we also based it on real-world examples.

OK. So, as mentioned, I'm from one of our specialist units, Inawisdom. We had an existing architecture called RAMP that allows us to set up all of the AWS environment security for machine learning workloads. Then we brought in the Cognizant architectures for gen AI to form this one overall pattern. But let's focus on a few things about it, and address the main thing we try to address in this architecture, which is privacy.

OK. And control over where your data is going and where it's stored. You'll see a couple of data stores in this architecture. We've got S3 in there to store any documents securely with encryption, so that's encrypted at rest. We also have a couple of other things. You have a choice of vector databases, but in this architecture we've only gone for pgvector; again, we've deployed RDS inside a private network and made sure encryption is turned on, so we know your vector store is securely deployed inside your network.
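A minimal sketch of the document-store side of that, assuming a hypothetical bucket name and KMS key alias rather than the actual RAMP configuration:

```python
import boto3

s3 = boto3.client("s3")

# Assumed names for illustration only.
BUCKET = "genai-private-documents"
KMS_KEY_ID = "alias/genai-docs-key"

# Upload a source document with server-side encryption (SSE-KMS),
# so the object is encrypted at rest with a customer-managed key.
with open("lift-manual-section-4.pdf", "rb") as f:
    s3.put_object(
        Bucket=BUCKET,
        Key="manuals/lift-manual-section-4.pdf",
        Body=f,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId=KMS_KEY_ID,
    )
```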

The interesting one to add to the general architecture, and we saw a little bit of it today, is Amazon DynamoDB. You can swap it out for MongoDB or for DocumentDB, the Amazon managed service. What we're doing there is generally two things. One is to store conversation history.

When you're building these kinds of architectures, one of the most compelling things from the end-user point of view is the system remembering what you've spoken about previously and feeding that back into the conversation. So, for example, when I've played with Claude, I've asked it, "Who is JFK?" and it comes back with who JFK was. Then the next question is, "Well, who was his wife?" and it comes back with Jacqueline Kennedy. So it's that remembering.

So we use DynamoDB a lot to store that conversation history. The other thing we do there, and this is what I'd encourage anybody who's going to play with this to really get into, is that the real skill is the art of the prompt, and also being able to change that prompt based on real-world examples coming through, to tune it in real time.

So in that situation, what we've done is store those prompts in DynamoDB so we can pull them back and add new prompts into the chain of events, which allows us to manage those things.
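A minimal sketch of that pattern, assuming hypothetical table names and key schemas ("conversation-history" keyed by session and timestamp, "prompt-templates" keyed by name) rather than the actual ones in this architecture:

```python
import time
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")

# Assumed table names for illustration.
history_table = dynamodb.Table("conversation-history")
prompts_table = dynamodb.Table("prompt-templates")


def save_turn(session_id: str, role: str, text: str) -> None:
    """Append one conversation turn, keyed by session and millisecond timestamp."""
    history_table.put_item(
        Item={"session_id": session_id, "ts": int(time.time() * 1000),
              "role": role, "text": text}
    )


def load_history(session_id: str, limit: int = 10) -> list:
    """Fetch the most recent turns so the model can 'remember' the conversation."""
    resp = history_table.query(
        KeyConditionExpression=Key("session_id").eq(session_id),
        ScanIndexForward=False,  # newest first
        Limit=limit,
    )
    return list(reversed(resp["Items"]))


def load_prompt(prompt_name: str) -> str:
    """Pull the current prompt template, so it can be tuned without redeploying code."""
    resp = prompts_table.get_item(Key={"name": prompt_name})
    return resp["Item"]["template"]
```

Keeping both the history and the prompt templates in a table like this is what makes the "tune the prompt in real time" point possible: the application reads the latest template on every request instead of hard-coding it.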

The other thing we've done from a security point of view in this architecture is that you'll see VPC endpoints everywhere, particularly to Bedrock and to SageMaker. What that means at an enterprise, corporate level is that your data doesn't leave your network estate.

OK. If you were going to call these services without that, you'd be calling them over the internet. With Amazon Bedrock and the endpoints, what you're doing is communicating from your network, your VPC, out to Bedrock via Amazon's private backbone.

OK. So that means you've got control over where that data is going. Also, the way Amazon has deployed Bedrock, you get an escrowed copy of the model, basically your own copy. That means your data is only ever used by that copy, which exists only for you, and they don't retain it for any fine-tuning or anything on their part later. It is purely for your purpose; they won't retrain the model on it. So you've got complete control over it, the original foundation model never sees your data, and that gives you reassurance about where your data will go.
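A minimal sketch of calling Bedrock from inside a VPC, assuming a VPC interface endpoint for bedrock-runtime is already in place; the region, model ID, and prompt are placeholders:

```python
import json
import boto3

# With a VPC interface endpoint for bedrock-runtime (and private DNS enabled),
# this client call resolves to a private IP and stays on the AWS backbone
# rather than going over the public internet. No endpoint_url override is needed.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Assumed model ID for illustration (Claude 2 on Bedrock, late-2023 era).
body = json.dumps({
    "prompt": "\n\nHuman: Who was JFK?\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = bedrock.invoke_model(modelId="anthropic.claude-v2", body=body)
answer = json.loads(response["body"].read())["completion"]
print(answer)
```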

The other thing I would say in all of this is that the user experience is key, and the most compelling experiences are the ones where you've really thought about the end user and how to interact with them. What we do with this architecture is deploy the Cognizant architecture in the middle to connect those components and perform the use cases, and I'll go through what those are in a second. We're running that in Lambda and Fargate, depending on the nature of the workload.

The one thing I would say, looking at Bedrock versus SageMaker again, is that SageMaker sits within your network, where you have a deployed instance inside your network, whilst Bedrock is escrowed outside your network but you communicate with it privately. So if you want that ultimate control, really inside your network, then SageMaker is your choice. But that security and privacy constraint comes at the cost of the per-hour pricing.

So what we try to do is put all these building blocks together to allow you to choose where to go. One of the use cases we've built with that is the Knowledge Navigator. It's your own private ChatGPT, as it were, on your private documents, your private data.

But you can rest assured, because of all the security and privacy controls built into the overall architecture, that your documents and everything else are constrained privately to your estate. It simply allows you to ask questions of those documents.

We've seen that across a number of industries. We've seen it in capital markets, where there are lots of regulation documents to process. We've seen it in some legal document cases; Nick will speak through some of this in a bit more detail when he comes on in a few moments.

We're also seeing it in the insurance industry, being able to query and ask questions of policy documents, that kind of thing. So there's quite a vast applicability for these things. And of course, you can do RAG with it.

Part of that conversation, and an example we saw in this situation, was lift engineers. They've got thousands of manuals for different lifts and different parts. So we built a use case where they could ask a question; it would understand the diagnosis of the fault, go and look up the appropriate part in the manual, and come back with how to fix it. Where RAG can also help in that sort of situation is if you need a part: you can ask whether the part is available at the depot, or what it would take to order it.

So the engineer, on their device, can have a manual for how to fit it, know where to get the part from, and have the diagnosis of what the fault was. That's a good example. But again, we've really concentrated on the security and privacy side of this architecture.

So, how it works. You've probably seen this a couple of times already in some of the keynotes; it's all about embeddings and vector databases when it comes to these kinds of things. The first stage is a prep stage where you take your documents in whatever format they are, typically DOCX or PDF for example, and you extract the content of the document.

Typically, how we do that is we'll go to Textract, and that will extract those pages from the document. There may be some preprocessing needed to get the document into the correct format, but that's how we extract it. Once we've extracted it, we send it through the embeddings.

Basically, what the embedding process does is take that text, and this was very much in the keynote this morning, and transform it into vectors, numbers, which is what the machines and the models understand. It's an n-dimensional array of vectors that captures the relationships between the words.

So the embeddings create those vectors, and then what you need to do is store them in a vector database.

That's where you can use Pinecone; in the example I showed previously it's pgvector, and there were other options announced this morning where you can store it. You're basically storing your private documents in there, and the thing to store alongside the vectors is the extracted text, so you can search for it, bring it back, and know where it came from, plus a reference to the original documents. That allows you to ground the answer by knowing where it came from.
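A minimal sketch of that ingest path, assuming Textract for extraction, a Bedrock embeddings model, and a pgvector table; the table layout, model ID, and connection details are illustrative assumptions, not the production pattern:

```python
import json
import boto3
import psycopg2

textract = boto3.client("textract")
bedrock = boto3.client("bedrock-runtime")


def extract_text(bucket: str, key: str) -> str:
    """Pull raw text out of a document in S3 with Textract.
    (Synchronous API shown; multi-page PDFs would use the asynchronous Textract API.)"""
    resp = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    return "\n".join(b["Text"] for b in resp["Blocks"] if b["BlockType"] == "LINE")


def embed(text: str) -> list:
    """Turn a chunk of text into a vector using an assumed Bedrock embeddings model (Titan)."""
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(resp["body"].read())["embedding"]


# Assumed pgvector table: documents(id serial, source text, chunk text, embedding vector(1536))
conn = psycopg2.connect(host="rds-private-endpoint", dbname="knowledge",
                        user="app", password="example-only")


def store_chunk(source_key: str, chunk: str) -> None:
    """Store the chunk text, its source reference, and its embedding so answers can be grounded."""
    vec_literal = "[" + ",".join(str(x) for x in embed(chunk)) + "]"
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO documents (source, chunk, embedding) VALUES (%s, %s, %s::vector)",
            (source_key, chunk, vec_literal),
        )
    conn.commit()
```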

So that's all part of the document store. Then you build a user interface to query it, and you use another model; in this example we're using Claude to have the conversation with the individual and track it, using the conversation database. Once it has understood the question, it asks the vector database to find all the relevant chunks that match the question's vector, pulls those back, and gives the conversational response, going to RAG as needed.
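And a minimal sketch of that query side, building on the ingest sketch above (it reuses the assumed `embed()`, `conn`, and `bedrock` objects and the same placeholder Claude model ID): embed the question, pull the nearest chunks from pgvector, and put them into the prompt as context.

```python
def answer_question(question: str, top_k: int = 4) -> str:
    """Retrieve the most similar chunks and ask Claude to answer using only that context."""
    # Reuses embed(), conn, and bedrock from the ingest sketch above.
    qvec = "[" + ",".join(str(x) for x in embed(question)) + "]"

    # Nearest-neighbour search by cosine distance (pgvector's <=> operator).
    with conn.cursor() as cur:
        cur.execute(
            "SELECT source, chunk FROM documents ORDER BY embedding <=> %s::vector LIMIT %s",
            (qvec, top_k),
        )
        rows = cur.fetchall()

    context = "\n\n".join(f"[{source}]\n{chunk}" for source, chunk in rows)
    prompt = (
        "\n\nHuman: Answer the question using only the context below, and cite the "
        f"source in brackets.\n\nContext:\n{context}\n\nQuestion: {question}\n\nAssistant:"
    )
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-v2",
        body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 500}),
    )
    return json.loads(resp["body"].read())["completion"]
```

Returning the source alongside each chunk is what lets the answer be grounded back to the original document, which is the traceability point made earlier.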

So, the next evolution of this, and we spotted very early at Cognizant and Inawisdom that this was going to be a killer use case, is the generative BI use case. You saw this morning, with Amazon Q in QuickSight, that this is another way of doing it, and I think you saw it in Redshift this morning as well for this kind of thing. We've dealt with documents, but there's also a lot of structured data in businesses, sitting in data lakes. The lady in the keynote spoke through it quite elegantly: the last frontier, the last barrier for the business to get insights, was that they'd have to work with a BI tool and understand how to wrangle it to get answers to their questions from the data contained in their lakehouse or their various data lakes.

In this architecture, what we're basically doing is using the metadata catalog of the data lake. You ask a question, and it uses that metadata catalog to understand the data. Again, we're using a UI at this point to take the questions and handle the conversation, and that goes to another model. What we're doing in this situation is putting all of that metadata from the Glue catalog inside the context, so we can ask it where the sales data is available. And that can query any number of data sources: in theory it can call Redshift, it can call S3, and so forth. Once it's pulled that back, it generates the content and returns the result to the user.

In terms of architecture, this is what it looks like, those steps I talked about. The model understands the question; then, once it understands the question, it uses the metadata to understand how to build the answer and runs the query. So it creates SQL at that point, runs the SQL on the data store, pulls the result back, and then uses another instance of the model to turn it into something that a human would understand.
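A minimal sketch of that generative BI flow, assuming a placeholder Glue database name, Athena result location, and the same assumed Claude model ID as earlier: the model writes SQL from the catalogue schema, Athena runs it, and the model summarises the result.

```python
import json
import time
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")
bedrock = boto3.client("bedrock-runtime")


def ask_claude(prompt: str, max_tokens: int = 500) -> str:
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-v2",
        body=json.dumps({"prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
                         "max_tokens_to_sample": max_tokens}),
    )
    return json.loads(resp["body"].read())["completion"]


def catalog_schema(database: str) -> str:
    """Flatten the Glue Data Catalog for one database into text the model can read."""
    tables = glue.get_tables(DatabaseName=database)["TableList"]
    lines = []
    for t in tables:
        cols = ", ".join(c["Name"] for c in t["StorageDescriptor"]["Columns"])
        lines.append(f"{t['Name']}({cols})")
    return "\n".join(lines)


def generative_bi(question: str, database: str = "sales_lake") -> str:
    schema = catalog_schema(database)
    sql = ask_claude(
        f"Given these tables:\n{schema}\n\nWrite a single Athena SQL query that answers: "
        f"{question}\nReturn only the SQL."
    ).strip()

    # Run the generated SQL with Athena and wait for it to finish.
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": "s3://genai-athena-results/"},  # assumed bucket
    )["QueryExecutionId"]
    while athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"] in ("QUEUED", "RUNNING"):
        time.sleep(1)
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]

    # Let the model turn the raw rows back into a human-readable answer.
    return ask_claude(f"Question: {question}\nQuery result rows: {rows}\n"
                      "Summarise the answer in plain English.")
```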

So, technically, how we do it. The thing that Inawisdom has particularly prided itself on is taking people on the journey. The technology is only part of the journey; the journey is about maturing a business, enabling that business to understand the potential, and then taking that enablement, coming up with ideas, and turning those ideas into reality.

So we use the following kind of methodology. We go into a business, go through what gen AI is and what the potential use cases are, and run that enablement. That can be at all levels of the business, from the board right down to individual engineers. The idea is to create the awareness that gets all the creative juices flowing for the next stage, which is ideation: coming up with all the use cases in the business where generative AI could be used. To be honest, the broader across the business, the better, because there's no such thing as a bad idea; any idea is a good idea. It's about surfacing those ideas, and it may well be somebody non-technical who has had that enablement who comes up with the winning one. So you want to funnel all those ideas from all those people, work out the best ones, the ones that can really make a difference to your business, and then focus on those killer ideas. That means that, in theory, you can build the business case and everything behind them.

The next stage we also encourage is coming up with not just ideas but also some policies on when you won't use gen AI and what kind of data you're prepared to put through these models. Now, I showed you a very secure architecture, but it still comes down to this: you'll have confidential information in your business, PII data, maybe PCI data, intellectual property. What we'd recommend is looking at all the ways you could potentially use gen AI, all those different kinds of data and interactions, and coming up with a policy so that the people inside your business know where they can use gen AI. An example: if you like using CodeWhisperer, one of your policies might be that you can use CodeWhisperer to write the code but not the tests. So then who writes the tests? You've brought a balance into the system; that's the human-in-the-loop balance. We see that kind of thing set out in the policy, and then it's about taking the ideas and scoring them.

So you want to understand the potential value an idea can bring to your business, or what you're trying to achieve with gen AI, versus the complexity. That can be the complexity of implementation, or maybe the complexity of accessing the right data at the right time; there may be a data silo that you still need to break down, or constraints around the data you're trying to access. We'd recommend looking at those kinds of things to rate the complexity. Once you have those scores, you select the top ideas, start one killer proof of concept around them (I'll go through that in a bit more detail), and then build out a roadmap for how you're going to deliver over the next period of time. One of the challenges we've seen in this crazy year is that everybody's got into ChatGPT, et cetera; everyone wants to play with these ideas and they keep bubbling up, but you do need some sort of strategy for when and how you're going to do them, or you'll have an adoption problem later on. So make sure you have a plan for the most important cases, when to bring them along, and how that fits with your policy.
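A minimal sketch of that value-versus-complexity scoring, with made-up ideas, scores, and ranking formula purely for illustration:

```python
# Score candidate gen AI ideas by business value versus delivery complexity.
# The ideas, 1-5 scores, and the ranking formula are illustrative assumptions.
ideas = [
    {"name": "Knowledge navigator for policy documents", "value": 5, "complexity": 3},
    {"name": "Email triage and routing",                  "value": 4, "complexity": 2},
    {"name": "Generative BI over the sales lake",         "value": 4, "complexity": 4},
    {"name": "Automated pitch-book drafting",             "value": 3, "complexity": 5},
]

for idea in ideas:
    # Higher value and lower complexity float to the top.
    idea["score"] = idea["value"] / idea["complexity"]

ranked = sorted(ideas, key=lambda i: i["score"], reverse=True)
for rank, idea in enumerate(ranked, start=1):
    print(f"{rank}. {idea['name']} "
          f"(value={idea['value']}, complexity={idea['complexity']}, score={idea['score']:.2f})")
```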

Then the next stage: you've got your strategy, you know what you're doing, you've got your roadmap, and it's about actually implementing it. The next step we encourage is putting some of those essential controls in place, and I showed you some of them in the technical architecture; there may be other things the organisation needs. Put the essential ones in. For your first use case you might not need all the controls, so there's a balance to be had, but put them in place so that they're there safeguarding you and you know your data isn't going to leak, et cetera. Then take that first use case and really explore it, discover it, look at the data. Have you got the right data in the right place, in the right format? Do you need to clean it up a bit more? Do you need to bring other data in to add to it, so you've got more information available? And where we really focus is the business case. You've picked a good idea and you've got the value, but you really need to develop and bubble up that business case and the return on investment. If it's going to take you three months to get to production, you want to make sure you're going to get the reward; if it's going to take a few weeks, the equation is slightly different. So refocus on that business case, and if it isn't there, don't do it, don't proceed with it; make sure you focus on something that can actually be delivered.

So this is very much an iterative cycle of making sure you pick the right use case, looking at its feasibility, and then building up the business case, because what the business wants to see at the end of the day is adoption. Then prove it: create a proof of concept, using some of our architecture or your own, and show what's possible. There's nothing wrong, in my opinion, with knocking something together in Streamlit and LangChain in about half an hour to show something. It's about showing the business the potential of the use case in a format they can understand. Maybe explore some of the different models available to you so you can really narrow down whether it's working and prove that value. Ideally you'd have some sort of pilot and some criteria you're measuring against, so you know it's a winner or, if it's not, you can refocus your efforts without having invested six to nine months getting from A to B. Then there's the embedding phase, which takes a little bit longer: that's about creating the MVP, actually getting it into the business, integrating it into a business process, and doing that first real pilot with customers and getting them to use it. You'll learn a lot more getting to that stage than you did in the previous stages, based on our client experience. But maybe don't focus too much yet on all of the automation and all of the quality; just do enough. Some of the LLMOps and that kind of thing can come a bit later, but make sure you deliver it into the real world.
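As an example of the "half an hour in Streamlit and LangChain" point, a minimal proof-of-concept chat app over Bedrock might look something like this, assuming the streamlit and langchain-aws packages and the same placeholder Claude model ID used earlier:

```python
# streamlit_poc.py -- run with: streamlit run streamlit_poc.py
import streamlit as st
from langchain_aws import ChatBedrock

st.title("Gen AI proof-of-concept chat")

# Assumed model ID; any Bedrock chat model available in your account would do.
llm = ChatBedrock(model_id="anthropic.claude-v2", model_kwargs={"temperature": 0.2})

if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far.
for role, text in st.session_state.messages:
    st.chat_message(role).write(text)

if question := st.chat_input("Ask a question"):
    st.chat_message("user").write(question)
    st.session_state.messages.append(("user", question))

    # Single-turn call to the model; a real PoC might add retrieval or memory.
    answer = llm.invoke(question).content
    st.chat_message("assistant").write(answer)
    st.session_state.messages.append(("assistant", answer))
```

Something this small is enough to put the idea in front of the business and test whether the use case resonates before any real engineering investment.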

Once you've got that main use case available in production and being used, it's about adoption and scaling it: starting the next one, starting the next iteration. Hopefully, by that point, you've established that cadence inside your business and you can drive the transformation. Maybe one of the things Nick will talk about is how this has ramped up and inspired their business so much that they want to do more and more, quicker and quicker. It's good to get to that stage, but you really want to get that adoption going at scale.

So, a bit more on Cognizant. We feel we've got one of the most compelling offerings to our clients in the market. We have wide industry knowledge; we have specialist SMEs in most industries that can help you create those ideas and use cases and understand where you're going and where your business is going. And the challenge, actually, is to think about where the business is going to be in five years, and to build a use case that's really going to nail it in five years, or transform your business to get there in five years, not necessarily just today's problems. Our experience across industries can help you with that.

Then there's governance and responsibility; Naveen touched on it, and the same thing is happening in the UK, with government summits and that kind of thing working on this policy. Regulation is going to happen, and regulation is already there: think about GDPR in the UK, and the similar data control and privacy rights for individuals over here. So it's about having SMEs who understand that. For example, the EU is going to bring out its AI Act very soon: if a model has an impact on an individual, you have to have traceability and auditability of the system. We've got experts who understand those regulations and their applicability in different markets and can help you.

And again, there's the technical expertise. I really think the key skill in gen AI is going to come down to the prompt engineer: they're going to have to know how to use the model really well, and they're going to have to understand the business context even more.

If you can harness that, then that's going to be the killer. So we have experts in those areas who can work with our industry experts at creating great prompts that really drive that through. And using the architecture, we think we've got a mature way of taking things from that pilot all the way to an MVP in production that's nice and secure.

The other thing we can help with is if you want to launch a product. I honestly think that's where most of the creativity is going to come in over the next three years: there are going to be new products launched to the general market, and they're going to be gen AI based. But the killer ones, and we actually learned this from ChatGPT, come down to the user interface, the simplicity of the user journey, and the feedback in that process; that's what makes it tactile to the user.

If you compare it to things like the internet, it was when search and all that stuff came into the home that adoption ramped up. So I think we're going to focus more on the experience than necessarily the technology over the next few years as we move on. But for us, it still comes down to right now: is the business ready? Is your business ready? Is your opportunity ready? And it's critical in that process that we understand how your business operates.

You know, you might have to go through a change management process to change the way your business functions to be able to exploit where gen AI is going to take your business. So you're going to have to take your people on the journey as well, and we can help you with that organizational change management piece.

What we can also do is look at readiness. If you're not ready to actually start on the gen AI journey right now, we can look at things to help you prepare for it: make sure you've got your cloud foundations in place, your cloud strategy in place, and also your data strategy. Have you got a data strategy? Have you got a data foundation? Because guess what: if you're going to use all this data, you're going to have to store it somewhere, and it's going to have to have governance controls, especially in regulated markets, that kind of thing.

So you're going to have to focus a lot more on that. I always say focus on the foundation as well, because then you're prepared for what comes next in a few years' time. So, that's enough from me. I'm going to hand over to our client Nick from Williams Lea, CTO at Williams Lea. He's going to talk about how they went on that journey with us.

Nick: Thank you. Good afternoon, everybody. Firstly, thank you to Cognizant for inviting me here to tell you a little bit about our story and how we mobilized our enterprise to adopt generative AI. Before I do that, I'm going to give you a little bit of context around Williams Lea. Then I'll take you through our journey: how we mobilized our business, drove excitement, and generated the ideas, and then how that transpired, what our challenges and outcomes were, and the lessons we learned.

So who is Williams Lea? Williams Lea is a business process outsourcing organization. We focus on three sectors: the legal sector, the financial services sector, predominantly investment banking, and the consultancy sector, all predominantly regulated environments where data is highly sensitive and you have to have strict governance around who sees what, which, interestingly, makes training models quite difficult as well.

We have around 7,000 employees and we essentially cover the globe. We have business in the US, we have business in EMEA and in APAC, and we have our operational centers in India as well.

What we do is offer our clients a very flexible operating model: we can work on site, we work onshore, with two sites in the US, in Wheeling, West Virginia, and Columbus, Ohio, and a site in Leeds in the UK, and then we offer offshore services as well with three sites in India.

And what we do for our clients, like I say, is business process outsourcing, but with highly skilled knowledge workers working on things like mergers and acquisitions research and pitch books for investment banking firms, and document processing, proofreading, and transcription for the legal sector.

And how this all really started is that we have a track record of deploying technology into this sector. We're predominantly an AWS shop. We have a proprietary service management tool that manages all our workflow globally and manages our workforce; that's our engagement platform, fully hosted in AWS. It has a data warehouse, we utilize Redshift, and all our data visualization, dashboards, and reporting come off that.

We've also digitized our intelligent document processing capability using things like Textract and Comprehend. So we've got a capability for deploying technology in regulated environments to ultimately drive productivity, efficiency, and value for our clients. What we didn't have, however, was any generative AI capability, and if you go back to the 30th of November 2022, probably very few people did.

So this is how we mobilized our business to go on that journey, and I'll take you through these steps in a little bit more detail. Ultimately, we had a mild panic, and then we started with: what is our AI vision? Part of that was, again, mobilizing our organization. We're an organization of 7,000 employees, and the people who know best what our clients need are the people who work with our clients every single day.

So we wanted to really engage every single employee on how we could use this great technology to drive value. We then formed what we just called our PoC team. This was a relatively small but highly empowered, very highly sponsored team that could work with those SMEs and deliver proofs of concept really quickly: fail fast or move on.

So we established this very nimble decision-making process. One of the keys was actually picking the right partner, a partner that could help us go on that journey and that worked in a very similar way to how we did.

Our vision, interestingly, was quite broad. It was: we know large language models and gen AI will disrupt our business. That wasn't really in doubt; the question was how quickly, and we wanted to be right at the beginning of it. We wanted to understand how it could disrupt our business, how we could use that as an opportunity, and how it could drive further growth and value for our clients and, ultimately, for our business.

So we wanted to embed AI and large language models in all our core service offerings. We wanted to understand how that would impact us: where can we drive new revenue streams to capture the upside of this and minimize the downside? And therefore, where do we focus, what is the technology, and do we buy, do we build, do we partner? It was a relatively broad AI vision; nothing was off the table.

So I suppose phase one was all about educating and exciting our business. The first thing we did, on the 1st of December quite possibly, was sit down with our CEO and say we have to make this available to all our employees. I know a lot of organizations, especially in regulated environments, restrict that capability, but our view was: let's get a really simple policy that everybody can understand, and let's encourage the use of it, in the right way. So we set the policy, we set out the rationale and the reasons for how and when you can use it, and then we fed it into every communication; all our monthly updates with the CEO covered what we were doing on gen AI.

One of the things that really engaged the whole population is that we launched a contest called Aim for the Stars. It was about crowdsourcing ideas for how we could use generative AI to drive value for our clients and for Williams Lea.

We ran this for about six weeks. We got significant input from all levels of the business and some really, really good findings. We found people who had never touched a line of code in their lives; they used ChatGPT and actually built working tools in things like macros and those sorts of things.

We really drove that, and we put real money on the table. We said there would be money for the top three, and if your product makes it to production and ultimately goes out to a client, then you'd get a further share of that value.

So we ended up with all these really good ideas, and then it was ultimately: how do we filter through that? Like I said, as part of that we created this small cross-functional team that sat every two weeks, and we still do.

The chief exec is on that PoC team, and ultimately we give updates, we make decisions, we stop programs, and we approve investment. It's a really agile way of keeping the momentum going, because a lot of this was about pace.

So what were some of the PoC team principles? The first is that whatever we do, it has to work in the real world. With limited time and limited budget, yes, some things won't make it past prototype or PoC or MVP, but ultimately you want to maximize the percentage that does.

So it was really about sitting down with our clients and understanding where their pain points are and where we can invest to create these capabilities. And then it was about focus.

Really, you can't handle, certainly a business of our size can't handle, more than three to five ideas at any one time. So we really wanted to focus. We got loads of great ideas, but it was about focusing on the first tranche and then moving on.

I did hear a good quote, I think it was a Steve Jobs one, that focus is easy if all you're cutting are things you don't care about; but if you've got an idea you care deeply about and you cut it because there's a better priority, that's true focus.

So we had lots of ideas that a lot of people were bought into, but we had to focus on the few.

Then it was about making those early decisions and, importantly, using data to drive them: understanding what the ROI would be, what the market share is, what price point we can charge, and ultimately what the business plan is around some of that.

One of the reasons, and I'll come on to this a little bit later, why we chose Inawisdom and Cognizant was their process for getting you from idea to what we call proof of value. In that proof of value, you have a working prototype and an ROI model. So you start talking about: what are the costs of running this in our AWS environment? What support structures do we need behind it? What price point will it be? How quickly can we get clients to adopt it? And ultimately, what does the revenue look like over one year, two years, three years? We can achieve all of that in eight weeks, and having a demonstrable model with a business case, and ideally two or three clients bought into it, is really, really powerful when you start making investment decisions with your investment committee.

That was one of the reasons we actually went with Inawisdom: we found that rapid prototyping and business case model really, really valuable. Obviously, as part of this, you need to get your stakeholders bought in. If you're doing operational efficiency, get operations bought in; if you're trying to sell a new service line, get the sales team bought in, because there's no point creating these technologies if the people who ultimately have to put their energy behind them don't believe in them.

This cross-functional team that we created had sales, it had marketing, it had finance, and obviously technology. Ultimately, it was about getting the energy of the business around this handful of focused new capabilities. And then it was about setting a baseline: understand where we are today, then as we launch, continually map those benefits, and ultimately move into product lifecycle management, so you're then into enlisting clients, adding new features, and driving forward.

So, yes, we feel that small, agile team was really successful. We've turned a number of ideas into actual live products that are out in the market through that process. A key part is obviously to celebrate quick wins: a lot of people are putting a lot of energy into this, and it's quite tough; creating a new product is really hard.

So celebrate your quick wins and over-communicate all your progress and results, because people don't hear it the first time and everybody's busy in the corporate world. If you're going to fail, fail quickly, pretty common sense really, and don't be afraid to change direction. The other one is to get people to focus on their strengths. We've got some really good subject matter experts who understand the legal sector, investment banking, and our workloads, but who might not understand generative AI or vector databases or large language models.

So bring in the specialism as you need it. Then, what are some of the challenges? I would say these challenges continue today. Finding the right use case is really difficult. Yes, we've launched a couple of products, but finding the right use case is the silver bullet, the secret sauce, because if you can find the right use case that will get adoption, that is deliverable, and that will drive value...

...then ultimately it will succeed. The other thing, and I think that's why there are so many people at events like this, is getting access to the technical know-how. I don't know how many new announcements there have been this week, but trying to keep up with the technology, with how to architect these things and which models to use, is changing daily. And I have, I suppose, the benefit of a very, very engaged chief executive, so I get pinged by emails: have you heard this announcement? How are we using this model?

It's great because we've got massive engagement, but as a technologist, keeping up with technology that is scalable and secure and will get signed off is a challenge, and as for having people who are naturally inquisitive and willing to experiment, frankly, there are just not enough of them.

Speed to market, again, is about getting that concept of a minimum viable product out to market quickly, getting the feedback, and then driving the roadmap; it's always a challenge. Resource availability comes back down to focus: there is always more demand than supply in any function, and this isn't just about technical resources. If we are building a capability for our document processing teams and they're 90% utilized on client work, you're not going to get them on a project. So it has to be an entire-business decision that you're going to prioritize these projects.

Then there's client adoption: understand that the markets we work in don't historically move fast. To get client adoption, if we can get an anchor client, get people to really be that pilot client and shape the roadmap from very early doors, then you're more likely to get adoption. Once you've got one, you've got a case study, and you can drive that forward.

And then ultimately, for us, the commercials can be a challenge. Historically, business process outsourcing charged by the FTE, charged by the hour. Implement automation and artificial intelligence, and all of a sudden I can do things in minutes that used to take hours, but I charge by the hour. So we as an organization, and a lot of BPOs and services organizations, are having the same challenge: how do we transform our commercial model alongside transforming our operational model, alongside introducing new technology?

So there's a lot of change to take on there, but it's doable. I mentioned we partnered with Inawisdom to build a number of our MVPs, and I think the key thing that sealed the deal for us was this framework for moving to proof of value really quickly. It's the same old adage, you know.

Was it "time kills deals"? Well, time kills projects as well. The faster you can get stuff into people's hands, the more chance you've got of really getting things off the ground. So having that eight-week program to go from idea to a working prototype with a business case and a commercial model, I cannot overstate how valuable that is when you go to an investment committee and demonstrate something.

Obviously, AI expertise will continue to be a bottleneck. And cultural alignment really matters: we work in a fast-paced environment, we are agile, and you have to roll with the punches from time to time, so you need a partner that will do that with you.

So what was the outcome, I hear you all ask. We've actually launched six new capabilities in the last 18 months, so one every three months. We feel that's pretty good for an organization of our size, especially in the markets we work in. We've got a further four that will be released very early next year, and actually, in Q4 2023, we're in the pilot stage now with two clients on one particular product, so it will go to general availability in January.

We do have a very healthy pipeline of new services, and I suppose the challenge here is taking in all these ideas, especially when you're crowdsourcing from your organization, and keeping people's engagement and motivation when you know you can't deliver everything. That is a real challenge for an organization, and I'm not sure we've overcome it.

But we've got lots of ideas and we try to keep people fully engaged. Obviously, we've got things like increased operational efficiency, and we do have new revenue streams that are tech-enabled, away from the old commercial model, that are now bringing value to Williams Lea.

The other thing is actually changing the perception of Williams Lea from a human-capital BPO business to a tech organization, and ultimately an increased profit margin.

So those were the outcomes. There's still a long, long way to go, but like I say, we've got a good foundation. I'll skip through these because I think I covered a lot of them in the outcomes, but, you know, make it a priority.

And that has to be across the entire business; no one part of the organization can do this alone. Those cross-functional teams are critical; they really drive ownership in every single function to get these things off the ground, and that cannot be underestimated. Partner with specialists, it goes without saying: it's a minefield, an absolute minefield, of what to use, when to use it, how to use it, how to keep it secure, all those sorts of things.

I'd say empower the team, but also limit the size of those teams. Really empower those people to make the decisions, and don't make decisions by committee.

Use data. We're quite lucky: we've got quite a good data foundation and data governance, so our metrics are very clean and we can make decisions on them.

And ultimately, have these frequent check-ins and keep clients at the center of everything. I think that's been a Williams Lea ethos for a long, long time: the client is at the center of everything we do, and if you do that, you're more likely to succeed.

So that was, I suppose, the gen AI Williams Lea story so far... there's a lot further to go. And with that, I'm going to invite Phil and Naveen up. Am I? Just Phil? There you go. Thank you for coming up and talking about our joint successes together.

We will be available for 5 or 10 minutes outside for any follow-up questions, but thank you all very much for attending our session. Thank you.
