Generative AI: Keeping it real

Good evening. How is everybody? Having fun? You can't hear me? Good evening, how is everybody? Have you seen the Sphere? Go inside, it's fantastic.

So this session will be about generative AI, and we have the pleasure of hearing from George Jacob from Capgemini. George is a veteran in the business; if I say Watson, does it ring a bell for you? George will talk about generative AI: how do we keep it real? What are the key topics that customers, and you and we here in the field, are facing? And how can we address them to make sure that we have the right approach to these solutions?

George over to you.

All right. Thank you, Tom. Hello, everyone. Welcome to the session. I feel like this year's name, re:Invent, is very appropriate, because we are all trying to reinvent ourselves with this generative AI thing going on, right?

So I'll skip my picture here, and I want to talk a little bit about what we want to cover, right?

We have all been in the tech space, and we have all used AI in the past, so let's take a look at what is practical and what is real in the generative AI space, and what we have seen from the different projects we have been doing at Capgemini for various clients.

Now, all of us have played with generative AI. One thing I realized is that it is really cool. It is freaking cool, right? So we are all ready to embark on this journey. And these numbers actually show how prevalent generative AI is: the big apps that we know took years to reach 100 million users, and generative AI took less than two months. That itself shows adoption is going to happen pretty fast, and we all have to adapt with it.

So those of us in the tech space, which is most of us, and in the AI space, we all know what classical machine learning is. One of my colleagues is sitting here, and I was wondering: when did we start calling AI "classical AI"? Right? I did not see that coming a few years ago, but now we are calling it classical AI.

Music is one of my favorite topics, so let me use an example from music, right? You feed a lot of Mozart's music to an algorithm, along with some other music, and you label which pieces are Mozart and which are not. Now you ask the algorithm whether a new piece is Mozart, and it will tell you, "I think this is Mozart," or "I think this is not Mozart."
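The labeling-and-asking loop described above can be sketched in a few lines. This is only an illustration of the classification idea, not real musical analysis: the two-number "features" (say, note density and average interval) and all the training values are invented, and the "training" is just a nearest-centroid rule.

```python
# Toy classifier for the "is this Mozart?" question. Features and data
# are invented for illustration; real systems extract features from audio
# or scores and use far richer models.

def centroid(points):
    """Average of a list of 2-D feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(mozart, not_mozart):
    """'Training' here is just remembering the centroid of each class."""
    return {"mozart": centroid(mozart), "not_mozart": centroid(not_mozart)}

def classify(model, piece):
    """Answer by picking the class whose centroid is nearest to the piece."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda label: dist2(model[label], piece))

# Invented feature vectors for a handful of labeled pieces.
mozart_pieces = [(0.8, 0.3), (0.7, 0.35), (0.75, 0.25)]
other_pieces = [(0.2, 0.8), (0.3, 0.7), (0.25, 0.75)]

model = train(mozart_pieces, other_pieces)
print(classify(model, (0.72, 0.3)))  # a piece close to the Mozart centroid
```

The point of the sketch is the shape of the task: labeled examples in, a yes/no style answer out, which is what distinguishes the classical setup from the generative one discussed next.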

Now, we can extend that to prediction and all of that stuff, but simplistically speaking, that's what classical AI is. When you go to generative AI, what we are doing is loading all kinds of music, everything you can imagine, and asking the algorithms to learn every intricate detail of that music: the composition, the style, and if there are musicians in this audience, they can go on at length about that. Every one of those aspects is learned by the algorithm.

Now, what you can do is ask the algorithm, or orchestrate these algorithms, in a way that generates music. So you can actually ask it to generate music that sounds like Mozart's style, or even go to the next level: take Beethoven's Fifth Symphony and say, play it like Mozart, right? All of that is possible.

Now, having said all that, we might know all these things, but I also want to give another angle to it: depending on the objective you are trying to achieve and the amount of data you can use to achieve it, you may not always have to build a humongous model like the GPT-4 language model. That is a thought I want to leave with you.

Then, when you want to build on large language models, there is a lot going on; the language space is so humongous. What happens is that these models are not always well explained or well comprehended. So when you build solutions on top of them, it comes with an inherent risk of the unknown, and you want to be careful and aware of that. The other items listed here you are probably familiar with; we dealt with some of them even in the classical AI space. But one last thing I want to highlight: when generative AI gives you a response, it may sound very confident and look accurate, but there is no guarantee that it is correct. Correctness is not guaranteed, so you want to keep that in mind when you try to apply this to use cases.

When we get projects in the generative AI space, we put them into different buckets.

One of them is what we see here as a conversational knowledge system, where you can interact with a knowledge base, ask questions, or even orchestrate a bunch of prompts to automate certain things, right? We have done some projects reviewing contracts; those are good examples.
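The pattern behind such a conversational knowledge system can be sketched as retrieve-then-prompt: pull the most relevant passage from the knowledge base, then fold it into the prompt so the model's answer is grounded in it. Real systems use vector embeddings and an actual model API; the keyword-overlap scoring and the policy snippets below are purely illustrative placeholders.

```python
# Minimal retrieve-then-prompt sketch. The knowledge base entries are
# invented; scoring by shared words stands in for embedding similarity.

KNOWLEDGE_BASE = [
    "Contract renewals must be reviewed 60 days before expiry.",
    "Invoices are payable within 30 days of receipt.",
    "Supplier onboarding requires a signed security questionnaire.",
]

def retrieve(question, docs):
    """Pick the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question, docs):
    """Ground the model's answer in the retrieved context."""
    context = retrieve(question, docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# The resulting string would be sent to a language model of your choice.
print(build_prompt("When are contract renewals reviewed?", KNOWLEDGE_BASE))
```

Constraining the model to the retrieved context is also one of the practical mitigations for the correctness problem mentioned above: the answer can be checked against the passage it was grounded in.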

The next one is where you're actually generating content. We have seen so much art in this space; you cannot open any social media feed without seeing generative AI art. So that's another type. But what else can you do? One example I use is contract drafting, right? That's a great example, and you can use generative AI very efficiently to do that.

Then comes this other bucket, what I call the "other" bucket, right? Every time we get a request, we have to pause and ask ourselves a bunch of questions. Is this something generative AI can do? Is generative AI good at numerical work? A good example is optimization. We have had requests like, "Can we optimize my supply chain with generative AI?" I take that question and turn it around a little bit: what part of that can generative AI actually do? Because large language models, like we talked about, may not be great at accuracy, or great at numerical computing. We need to keep that in mind.

But as a practitioner or a decision maker, what we need to think about is: are we hallucinating, not the LLMs? Are we hallucinating that all these use cases can be achieved by large language models, right? That's something we have to ask ourselves.

Then there are certain types of use cases that may be beyond language. So the question is: is a language model enough, or do you want something more than that to address your business-specific problems? But when you embark on this, keep in mind that, depending on the value you are trying to generate, the complexity could be higher, and there are cost factors and other practicalities associated with that.

The other topic I hear is: let's use generative AI to create a lot of data, so that we can mitigate the lack of data in machine learning, right? Yes, we can probably do that, but ask yourself a bunch of questions. Do we know what the inputs are? Do we know what the outputs are going to be? Is it predictable? All these questions lead to a lot of ethics and other risks associated with it.

A good example: if you are trying to use generated data for machine learning or other use cases in health care or legal systems, you want to be aware of this, because the models themselves do not have inherent knowledge, right? They do not understand ethics, and they do not understand culture. So we need to be aware of that.

Then, last but not least: if you are trying to generate data for machine learning, you need a lot of data, which means you are going to use the service a lot, which is a factor in your cost, right? That's something you want to keep in mind as well.
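The cost point above is simple arithmetic, but it is worth running the numbers before committing to large-scale synthetic data generation. The per-token price below is an assumed placeholder, not any provider's real rate card; substitute your own pricing.

```python
# Back-of-the-envelope cost estimate for generating synthetic data via a
# hosted LLM API. The price is an assumed illustrative figure only.

PRICE_PER_1K_TOKENS = 0.03  # assumed dollars per 1,000 tokens; check your provider

def generation_cost(num_examples, tokens_per_example,
                    price_per_1k=PRICE_PER_1K_TOKENS):
    """Total dollar cost of generating num_examples synthetic records."""
    total_tokens = num_examples * tokens_per_example
    return total_tokens / 1000 * price_per_1k

# One million synthetic records at roughly 500 tokens each:
print(f"${generation_cost(1_000_000, 500):,.2f}")
```

At these assumed rates, a million 500-token records already runs into five figures, which is exactly the "you are going to use the service a lot" factor the speaker warns about.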

The next conversation is: we are going to replace all the programmers with automatically generated code. Can we do that? Let's think about it. If you are going to take code from the publicly available models, does it align with your architecture? Does it align with the standards and patterns you have defined? Can you make the model do that? Yes, but you probably have to do an additional step of fine-tuning, put some guardrails in place, and then you can ask the programmers to use it.

Testing is another space. Yes, we can create test cases and test data. Then the question is: does it cover all the scenarios? Does it address privacy? Does it address security? None of these can be ignored. The models are not a magic solution that lets us replace all the programmers; we need to think about how we are going to use them.

But the most important thing, I feel, is the maintainability of the code, as well as employee skills. Imagine you have a lot of people copy-pasting generated code. We face this in today's world too: a lot of people copy-paste code from Google, and in the agile world, when you ask, "OK, can I change this to something else?", we are lost, right? We have seen that, and this is only going to amplify it. So we want to put enough guardrails in place to manage that.

And eventually, we don't want a skill drain; we want people to be aware of what code they are putting in there. With that said, from a practical application perspective, we have seen a lot in content generation, document review, and automation.

I talked about contract review earlier; that's a great example. The same goes for content drafting and marketing content. We have actually done projects creating descriptions for products: short descriptions, long descriptions, pulling in all the information around feedback, reviews, market trends, and brand positioning, and combining all of that to generate the description. So those are examples.

Then, I don't even want to talk about the art anymore, because we have plenty of examples of that. But idea generation is another interesting one. We talked about the output not being accurate, about hallucination and fictional output; maybe that's where idea generation comes in, right? But you want to be careful which idea you pick from there.

Knowledge assistants: again, I talked about this earlier, but we have done quite a lot of that. Software engineering I already covered, so I'm going to skip it. But I want to highlight a few key words here: drafting, assistant, advisor. These are very important terms in this space, because you want to set the expectation with the users that these systems are not going to automate and do everything for you. They are going to be assistants or advisors, right? That is very important.

This slide represents the various use cases we have worked on. I want to highlight a couple of them. Root cause assessment and analysis: this is an important one, and we have seen it a lot in the energy and manufacturing space, where they take an instrument, bring it in, disassemble it, look at it, and create the root cause assessment report. That is one place where we are able to use generative AI effectively, because there is a lot of content, much of it fits within an existing framework, and it is easily doable.

Document deduplication: that's a very interesting use case. They had a lot of documents, and nobody had a clue how many of them were duplicates, right? That's a simple, easy-to-implement use case.
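The core of such a deduplication pass can be sketched briefly. At scale you would use embeddings or MinHash rather than comparing every pair; here near-duplicates are found with plain word-set (Jaccard) similarity, and the documents are invented for illustration.

```python
# Toy near-duplicate finder. Jaccard similarity over word sets stands in
# for embedding or MinHash similarity; documents are invented examples.

def jaccard(a, b):
    """Similarity of two documents as word-set overlap (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def find_duplicates(docs, threshold=0.8):
    """Return index pairs whose similarity meets the threshold."""
    pairs = []
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            if jaccard(docs[i], docs[j]) >= threshold:
                pairs.append((i, j))
    return pairs

docs = [
    "quarterly revenue report for the northern region",
    "quarterly revenue report for the northern region",  # exact duplicate
    "employee onboarding checklist and welcome guide",
]
print(find_duplicates(docs))  # flags the first two documents as a pair
```

The all-pairs loop is quadratic, which is fine for a sketch; a production system would index documents first so that only likely matches are compared.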

Knowledge assistants: that's another one, like we talked about, and very easy to implement. If we haven't started building knowledge libraries, we should start thinking about it as soon as possible. It is very easy to build knowledge libraries with different knowledge bases, manage the access, and have users get the information at the right time. That's also a very good one.

We talked about marketing content earlier, about generating item descriptions, but there is also hyper-personalized information that can be sent to individual users or consumers.

Now, before I wrap up, just leaving a thought with you: from an enterprise perspective, what are the things you need to think about when you embark on a generative AI journey?

Strategy and architecture are extremely important, right? If you don't pick the right use case on day one, and if that fumbles, the recovery is going to be long, because there is a lot of skepticism out there. So we want to be careful about picking the right use case, and there are good use cases out there that we could use.

Architecture: lay the right foundation. Make sure you have the right architecture to plug and play and use all kinds of different services that are available. And trust me, this is just going to change in the next six months; there are going to be another 25 different services coming up, right? So you want to make sure you have the right foundation.

The other areas we talked about: assistants, customer experience, hyper-personalization. These are topics you can think about.

Last but not least, custom generative AI models, where you can go beyond what is readily available and customize your model: not just fine-tune, but maybe go two steps further. And like I said earlier, depending on your objective, your model need not have trillions of parameters; it could be a little smaller than that too. We need to keep that in mind.

Then, from an organizational and people perspective, what are the things to consider? A center of excellence: this is a capability we need to build in our organizations for sure, whether it is used for this particular use case or another one. Irrespective of that, we need to build that capability, so a center of excellence is definitely something you need to look at.

A lab: let's set up a lab so that you can experiment. This is something new, and it is going to change every day for at least the next year; then some other kind of generative AI will come along and we will start again. You have to be able to experiment before you put this out there for the end users.

Last but not least, leverage the partners. I work for Capgemini, and we invested $2 billion to upskill our employees as well as build a lot of content around this. That kind of partnership is something you can look for.

So with that, I will conclude my talk, but if you have questions, please feel free to stop by the booth and we'll be glad to explain more. Since I have a few more minutes left, I want to leave one more thing: there is a decision tree that we have built for deciding whether a particular use case is a good fit for generative AI or not. I didn't include it because I thought we were going to run out of time, but I think I talk too fast. If you are interested, please stop by; I'll be glad to share it. It's a good one that we have seen working for different types of use cases, and it would be great to get your feedback as well.

So with that, thank you, and let Tom conclude.

Thank you, George. So would you say: security, efficiency, privacy, and then look in the right places?

Exactly. So with that, questions can be asked if you have any, or come to the Capgemini booth or find me. I'm Tom Metzler, solutions architect, data and AI. Happy to answer any questions.

All right. Thanks everyone. Thank you.
