Realizing value and business outcomes with AI

Welcome everyone. Today in this session, we'll discuss how we can accelerate AI adoption to drive business outcomes and value realization.

With me is Rija, Chief Data and AI Officer at DXC.

Rija: Yeah, I'm Rija. Nice to meet you. I work at DXC. I'm Head of Business Analytics and Data Science.

Sathish: Yeah, thank you. I really appreciate the opportunity to speak today. I'm the Managing Director of AI Specialists at Amazon Web Services. We work with customers like you to understand your key business problems, work backwards from them, and help you build solutions that solve those problems. And in doing so, we're able to influence the roadmap and strategy of the services in our portfolio. Thank you for joining us.

Rija: AI-driven outcomes have been relevant to business processes for quite some time. The returns are not incremental; we have seen many millions of dollars in returns from these kinds of AI-driven projects. Applying AI insights has been common for quite a while. However, what we have seen is that AI has not become mainstream.

I was at Best Buy before joining DXC. There we did adaptive and dynamic pricing with AI for competitive price matching; one of the drivers was avoiding the retail apocalypse back in 2014. So companies have been adopting AI since then. And we have been working on autonomous driving for the last five to six years, with AI-driven implementations at the center of all of it.

We have even done R&D projects such as creating an AI avatar of Dr. Peter Scott-Morgan, who was the first human cyborg. But adoption has not happened, because 70-80% of models still do not go to production. The reasons we have heard from our customers include not understanding where in the value chain the insight will generate maximum outcomes, and not having a stable platform that can scale to the enterprise level.

So my question to you, Sathish: how do you think businesses will come to see that this wave of AI can revolutionize the way we use AI in our business processes?

Sathish: I think we are in a particular era. AI has been around for many, many years; the term dates back to the 1950s. We have to give some context on what AI is, because AI can be seen as machines doing tasks that humans can do. It's more or less a concept in computer science. But I think it's changing a lot with the new models and recent developments, and the perception of businesses is changing a lot too. My feeling is that business owners now see the potential of this technology, thanks to discussions like the one we're having here.

The big step came when we introduced machine learning and deep learning. Instead of algorithms where the programmer defines each step, with learning, models can find patterns and rules by themselves, given data. So the process is done by the model, not the programmer.

With generative AI, even the outcome is not predetermined: we ask these models something and we don't know the exact answer. In my view, AI used to be associated with machines doing rational tasks like calculations and engineering, not artistic work. Now we are in a phase where we have found that machines can create a song, a poem, a text. This changes the perception of the technology's potential and its use cases. There are many more use cases coming with these new technologies.

I think business owners now are starting to realize this, thanks to the popularity of models in general. This year, many models and technologies were introduced. Now AI, machine learning and generative AI are commonly understood topics. All tech sessions touch upon AI and machine learning. So business owners now likely realize the importance of this and there has been positive marketing. With good technology, you need communication to make it popular and known.

Generative AI has pushed knowledge for everyone, business owners included. Of course we can do more, but it's a different stage because now we understand machines can produce artistic things too. It's a different vision of machine potential, let's say.

Rija: You're right, generative AI has pushed the boundaries. I'm seeing tremendous momentum to operationalize and industrialize these solutions, which will hopefully bring down the price point and make things simpler. Previously, only large organizations and tech giants were able to harness AI profitably. But now I think there will be a bigger push for adoption across different business formats.

What do you think?

Sathish: I have the privilege of working with customers across all industries. This is what we've seen from the Amazon side - AI adoption has been bimodal in distribution. What do I mean by that? There are leaders who have thousands of models in production. How many here have over 100 models in production? How many have over 1000 models in production? How many have less than 10 models in production?

You can see there's a bimodal distribution - advanced teams with thousands of models in production, and others only doing experimentation with 5-10 models.

To answer your question, I think there are 4-5 major barriers preventing customers from becoming leaders:

  1. Cost and ROI - many don't understand the cost and return on investment.

  2. Skill set - over 50% in our surveys say they lack the skills to adopt and manage models at scale.

  3. Complexity of tools - this has come a long way, with hundreds of thousands now using models in production, but historically a barrier.

  4. Regulatory concerns about explainability and responsible AI in regulated industries.

  5. Data readiness - may not have structured/unstructured data or know what insights to automate.

However, in the last six months, we've seen significant opportunity to use foundation models and large language models to solve productivity, cost-reduction, and automation use cases with both generative and predictive AI. This can help with customer experience, personalization, document summarization, call centers, and finding insights in data. Many things are possible today that weren't before.

I encourage leaders here to identify use cases that can have meaningful productivity gains you can put into production now. Amazon and DXC as partners are here to help achieve that vision.

Rija: Right, it's not just about democratizing AI models, but democratizing value creation. With collaborative efficiency across an organization, it can drive productivity - maybe we'll look at a 3 day weekend sometime in the future!

OK, so you talked about a survey. We also did a survey of Fortune 200 companies on impediments to absorbing AI culture. 9% of IT and business leaders said technology was the problem. 91% said it was a people, process and culture problem.

I've seen organizations take different measures - some make an aggressive AI strategy, some create crisp governance and operating models to realize AI value. But it varies based on how aggressive they want to be.

So the question to you, Sathish - what do you think would be the ultimate catalyst for AI adoption?

Sathish: Great question. I hope you can validate whether what I'm about to recommend has legs.

I talked about the barriers - cost/ROI, skills, complexity, regulatory, data. But there are 6-7 things I've seen where customers successfully adopted generative or predictive AI at scale:

  1. They understand the use case for their industry/business, the data needed, and how to measure value. Without clarity, it's just experimentation.

  2. Leverage resources - if you lack skills, partners like Amazon and DXC can help identify use cases and a roadmap in ideation sessions.

  3. Training for your teams - enablement and training on the latest developments is key.

  4. Start with easier productivity gain use cases first, then more complex longer term. Build a roadmap.

  5. Measure value and have clear metrics - what does success look like?

  6. Governance - as you scale, managing models in production and monitoring for accuracy, bias, drift.

  7. Culture - executive sponsorship and alignment between business and IT.

Those are some of the best practices I've seen work when taking models to production at scale. Please let me know if those resonate based on your experience.

On skills: whether you want to leverage your existing organization to do prompt engineering, or take your data science teams and give them the ability and tools to really assess the business value of use cases, the training and the capacity to build that talent within the company is super important.

I think right now we talk a lot about foundation models and API-driven models, but the world is really all about applications: the applications being powered by the models, and how you use those applications to solve business problems. So it's important to understand how you integrate those tools into your application development framework and into your CI/CD pipelines.

Last but not least: leverage your partner network. If you lack the talent, or you don't understand which use cases are relevant for your industry, leverage your partners; leverage Amazon. Feel free to reach out to me after the session, and we'll put you in touch with the right talent and skill sets to help you decide which use cases are meaningful for your industry, which horizontal use cases apply to the functional teams in your organization, and how you take them to production. Those are the things I've seen successful companies do.

Rija: It's good that you mentioned that, because I do believe the classical models, supervised and unsupervised, and the generative models will coexist. And if they can augment each other, we get tremendous momentum forward.

I'll give an example of what we did, since you talked about it and it reminded me. At one of the top three OEMs, we had a computer vision model for car scratch detection. The manual processes in place could reach only 72% accuracy. And as you probably know, if a defect makes it through to the dealership and has to come back through the whole lifecycle, it's a lot of expense for the auto manufacturer.

So we switched to automated detection. However, we could not get beyond 84% accuracy, because there were not enough images of certain cases; say, a curve in the car body with scratches on the curve, where the production lights in the factory are reflecting off that particular surface.

We couldn't get those images. So we used multimodal generative AI, Stable Diffusion models through SageMaker, to create those net-new images, fed them back into model training, and got a 4-5 percentage point increase over the conventional model, as they call it now.
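To make that augmentation step concrete, here is a minimal sketch of generating net-new defect images with Stable Diffusion. It uses the open-source diffusers library rather than the actual SageMaker pipeline described above, and the prompt and checkpoint are illustrative placeholders, not DXC's production setup.

```python
# Minimal sketch: generate synthetic "scratch on a curved panel" images
# with Stable Diffusion to augment a scarce training set.
# Checkpoint and prompt are illustrative; any SD checkpoint would do.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

prompt = (
    "close-up photo of a scratch on a curved car body panel, "
    "bright factory lights reflecting off the glossy surface"
)

# Save the net-new images so the next training run can pick them up.
for i in range(100):
    image = pipe(prompt).images[0]
    image.save(f"synthetic_scratch_{i:03d}.png")
```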

So I would like to ask you the same question.

Ferrari speaker: I agree with Sathish. The idea is that everything starts with knowledge, and we perceive this in our company too. So we started organizing sessions open to all employees about data: the importance of data, the value of data, plus data visualization and a little bit of machine learning for everyone. It was a really big success; the sessions were very well attended.

We also run an "ML for business" program. We want to collect use cases; as you said, it's important to have a way to collect use cases and to show business owners what they can do with machine learning and AI.

So we started this program: we organized sessions with all the business owners in the company and, together with AWS, we were able to explain, from a business point of view, the benefits of applying machine learning and AI within the company. We collected a really huge number of use cases, which were then evaluated, and some of them became very real projects.

So for me, learning is key. We are also organizing a lot of training, specifically for technical people, data scientists, within the company. And what I found is that we have the skills within the company, but they are spread all over the company.

We are not a super-big company, we are 5,000 people now, but still, we cover three big businesses: sports cars, Formula 1 racing, and lifestyle. People cover different businesses, and the idea is to create bridges between the employees.

So what we have done is what we call the Data Science Hub, which is a way to bring Ferrari employees together: all the people who are data scientists or data analysts, or who simply love machine learning, can join these sessions that we run regularly.

We put people together, they describe their projects, and this creates a lot of internal collaboration, which is very useful. The idea is that we also want to centralize the architecture and the governance of these use cases; otherwise it would be really complicated to manage the different businesses, with all of them relying on different suppliers and creating different use cases with different technologies.

Rija: That approach was very successful, very nice. One thing I caution the customers I talk to about is: don't try to force-fit AI. The value has to be understood organically. Unless people understand and realize it, it will not be in their muscle memory to look at every single scenario and think, OK, I can probably drive some productivity, efficiency, and optimization here.

So my next question to you: organizations have adopted AI in some form or another, and they also have AI strategies, and they are probably in different phases of evolution. What do you think this new wave of AI is going to push you toward in augmenting your current AI strategy?

Ferrari speaker: Yeah, I see differences depending on the business. As we said, there is some prudent behavior here, because the technology is changing a lot. It's not always a good option to invest heavily in one single technology or one single model, because new models are coming out every day.

So we are using AWS: we are using SageMaker, and we are also using Bedrock for generative AI. What we like is this idea of having an open possibility to select the model that you want, and also to change models while keeping the application as it is, let's say.
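A minimal sketch of that decoupling, assuming today's boto3 Bedrock runtime client and its model-agnostic Converse API (the model IDs below are just examples): swapping the model is a one-argument change while the application code stays the same.

```python
# Minimal sketch: the application logic stays constant; only modelId
# changes when you swap the underlying foundation model.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def summarize(text: str, model_id: str) -> str:
    response = bedrock.converse(
        modelId=model_id,  # the only model-specific piece
        messages=[{"role": "user",
                   "content": [{"text": f"Summarize this:\n{text}"}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

doc = "Press release text to summarize..."
# Same application code, two different models (example IDs):
print(summarize(doc, "anthropic.claude-3-haiku-20240307-v1:0"))
print(summarize(doc, "amazon.titan-text-express-v1"))
```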

This approach is working very well at Ferrari. Regarding the business and where we stand: we use a lot of technology, because Ferrari is built on innovation and technology, and that's key for sure. But we also rely a lot on people.

The point is to put technology and people together. At Ferrari, our founder used to say that companies are made of buildings, of machines, and of people, but that Ferrari, above all, is made of people.

What we are interested in is keeping the human at the center of the experience. We produce cars, but more than cars we produce driving experiences. We are not a commodity; we don't want to provide a service that just gets you from point A to point B, we want to provide driving experiences and fun.

So the user, the customer, the owner, is always at the center of the experience. Our idea is that AI and machine learning will never substitute for the person; rather, they will cooperate. Machines and humans need to work together, and that's very relevant for Ferrari.

For example, in our offering we will not go for autonomous driving; we're not working on that part, because we think the engagement is the fun of driving, and our customers love Ferrari because they want to drive.

But we still use a lot of technology to support the driver. We are investing a lot in machine learning and AI to support the driving experience, but always with the human at the center. That's very key at Ferrari.

So we invest a lot in technology, but we always care even more about people, let's say. I was also listening to the keynote on Wednesday, the chief data and AI officer, and I really liked the idea of technology, humans, and data all together looking at the future, so we can combine all of them. Sometimes we look only at technology, only at machines, and forget about people, and we would end up in a robot society. That's not what we would like.

Rija: Yeah, exactly. And I have seen two more trends I'll just mention. The first is that some companies are creating an office of value realization. It can be called different things, but its main function is to contextualize the applicability of AI within a value chain. They do value-stream mapping to see where in the organization, across the whole end-to-end value chain, there are legal implications and compliance issues.

What happens is that you are ultimately able to avoid behavioral vulnerabilities. The second trend is similar to what you said, but a little different: people are enhancing the operating model, and the transformation office is being augmented with a fourth pillar.

So instead of considering only people, process, and technology, add decision intelligence powered by AI, which supports intelligent decision-making across the three pillars. Then we can use the rigor of the transformation office to drive very clear, transparent communication on AI strategy, and break organizational silos to bring multidisciplinary teams together to work on AI value realization.

That really creates a playbook across your organization for navigating seamlessly, because there are different lines of business and different portfolios, right?

So the next question to you, Sathish. There will be thousands and thousands of people working on these models, and pretty soon maybe thousands of models as well. How is AWS thinking about strengthening the base with the services it is bringing?

Sathish: Hopefully many of you attended the keynote session of Adam Selipsky, our CEO, so I'm going to use his terminology and nomenclature to describe our strategy. We think of the machine learning stack in three layers.

First, the core infrastructure layer. Most of us sitting in this room may think it's boring stuff, but the reality is that you cannot ignore the infrastructure, whether you think about GPUs, storage, or your data lakes, fundamentally because, with this wave of generative AI, a lot of customers who have tried to take workloads to production have realized that the cost of inference at scale can be an inhibitor.

Ignoring the implications of picking the right compute instances, or of choosing the right model size in terms of billions of parameters, has a lot of downstream effects that need to be thought through and planned for up front.

So the infrastructure layer is where we will continue to invest, whether through our NVIDIA partnership or through our custom silicon, because we believe that to scale machine learning workloads in production you need optimal infrastructure so you can get the right price-performance balance.

The middle layer is where we are solving a very large problem: high-complexity tooling. For those of you who are data scientists and passionate about machine learning like I am: if you go back even a year or two, look at the number of steps you needed to follow to get a model into production. You need to have the data, you need to prepare the data, you need to pick your features, you need to build and train your model, and then you need to deploy the model to production. You need MLOps capabilities to maintain it in production. You need responsible AI: drift, explainability, et cetera. That's a complex process.
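For a sense of how many moving parts that is, here is a compressed, illustrative sketch of just the first few of those steps (data, preparation, feature selection, training, evaluation) using scikit-learn on a bundled dataset; deployment, MLOps, and monitoring still come after all of this.

```python
# Compressed sketch of the classical model lifecycle listed above:
# get data -> prepare -> pick features -> train -> evaluate.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)            # 1. have the data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),                      # 2. prepare the data
    ("select", SelectKBest(f_classif, k=10)),         # 3. pick features
    ("clf", LogisticRegression(max_iter=1000)),       # 4. build and train
])
model.fit(X_tr, y_tr)

print("accuracy:", accuracy_score(y_te, model.predict(X_te)))  # 5. evaluate
# 6..n: deploy, monitor for drift and bias, retrain -- the hard part.
```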

Now we think we can abstract that, and that's why Bedrock was born: a fully managed service that abstracts the complexity of building a model from scratch and accelerates your time to building and deploying a model in an application, in production.

So we think of the middle layer as something that gives customers optionality. If you want to build your model from scratch, you can use SageMaker. If you want to jump-start your development with open-source and third-party models, you can use SageMaker JumpStart. If you just want to consume curated models through an API, you can use Bedrock. We give customers that optionality depending on where they are in the maturity of building and training models.

Then, at the topmost layer, is the application layer, for applications where we see really significant productivity gains. Think of Amazon CodeWhisperer, a coding companion, or our business intelligence tooling with Amazon QuickSight Q. These are services we want to make available to customers so they don't have to worry about building an application; they get a model pre-integrated with the application to deliver the benefit they're seeking.

So those are the three layers in which we want to innovate. We'll continue to innovate across all three, and that is central to our strategy.

Rija: OK, awesome. So I'll share what we've done. We have been working on autonomous driving for quite some time, and that is a mission-critical AI application running at the edge. The framework we created has been hardened in production over the last three to four years, and we have seen that a collaborative workbench is essential. What we are doing now is taking these frameworks and accelerators and augmenting them for what this new age of AI requires.

What you see in the collaborative workbench is an ontology, a concept we have plugged in where a business overlay sits on top: you have a business process, and it overlays the data topology underneath. That way you can figure out exactly where in the business process you are embedding the insight and where the returns are coming from. Data quality is extremely important; this is a problem, I think, that nobody really takes ownership of, and having true traceability of exactly where a data quality issue exists is very important for all these AI models.

I'll just touch on the next one, the verification and validation framework. As you may know, in automotive there is the ISO 26262 standard, which dictates that every model version that ships has to be saved along with its data version and the version of the tool it was tested with. We have a framework that complies with that. And with generative AI we will get to a state where we need to save versions too, because we'll be moving so fast; we still need to be able to go back and see what the recommendation was and why. So those are the frameworks in that first building block, the collaborative workbench.
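As an illustration, a minimal sketch of the kind of release record such a framework might persist, with model, data, and test-tool versions pinned together so any recommendation can be traced back later. The field names are hypothetical, not DXC's actual schema.

```python
# Minimal sketch of an ISO 26262-style traceability record: every
# released model version is pinned to the exact data and tool versions
# it was tested with. Field names are illustrative placeholders.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class ModelReleaseRecord:
    model_name: str
    model_version: str       # e.g., git tag or registry version
    dataset_version: str     # e.g., data-lake snapshot ID
    test_tool: str
    test_tool_version: str
    released_at: str

record = ModelReleaseRecord(
    model_name="scratch-detector",
    model_version="2.4.1",
    dataset_version="snapshot-2023-11-08",
    test_tool="vv-harness",
    test_tool_version="0.9.3",
    released_at=datetime.now(timezone.utc).isoformat(),
)

# Append-only audit log: answers "what was the recommendation, and
# which model/data/tool combination produced it?" months later.
with open("release_log.jsonl", "a") as f:
    f.write(json.dumps(asdict(record)) + "\n")
```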

The next one you see there is sustainable edge embedding. It's one thing to embed insights at scale, but how do you continuously monitor them? We have been monitoring perception models in the car, and those simply cannot go wrong. So skew, model explainability, bias detection, and even reproducibility are so important. Those functionalities are in the base framework, and we are expanding it for generative AI and the other things coming up.

The third building block is the software development platform. Most of our customers are multi-cloud, or at least in a hybrid environment, running models on premises, in the cloud, or across multiple clouds. It is cost-prohibitive to rebuild the implementation again and again in each of those hosting environments. So we created a way to exchange model images across clouds and move the compute to the data, because otherwise we would be moving petabytes of data back and forth.

The next one is multi-tenancy. We created this one, again, for autonomous driving. What it was geared to do: if you look at how OEMs collaborate on models with their tier-one suppliers, they have to work in the same environment. The tier-one supplier needs to bring its product and test it with the data, but there still has to be segregation of which data they see and which version of which model they see. That kind of resiliency has been built into the platform.

Ultimately, this is the end-to-end platform we built, and we are augmenting it, keeping in mind that the most important thing is interoperability across all the systems and all the data and AI products we have. Otherwise it's just not scalable.

And to that point: every time there is a technology shift, there is, I would say, anxiety, and there is excitement as well. AI is a tectonic shift. We went through the same thing with cloud; it went through the same bell curve of anxiety. But look at us now: cloud modernization is a stable, table-stakes discussion.

So my question to you would be: how do you see the world embracing this, and where do you think the world will be in one year? I won't ask about two years, because that's probably way down the line.

Ferrari speaker: First, I want to say that regarding this architecture: yes, we don't do autonomous driving, but I see some similarities between what one does there and the work we are doing on data.

As you said, once you put the data together and organize it, you can do a lot with machine learning and AI. We are working a lot on the data layer, let's say, and we have extended that to the non-autonomous-driving space as well. I think having good-quality data is key to having good-quality machine learning models.

Regarding the future, it's always very difficult to make predictions, because the world is changing so fast that if I watch this video in one year, things will probably be completely different. This is my personal view, of course; I'm not representing Ferrari here, because we don't make predictions. But what I see at the moment is a lot of startups. There is still a lot of hype around machine learning and AI, so for sure there is potential.

We have said there is a lot of potential in the technology. There is also probably a need for consolidation among the players. My vision is that a few players will come to cover a lot of features, and they will be selected by the users, by the customers. That will probably reduce the number of startups, focusing the market on big, major players and big applications, let's say.

But still, I think there will be a lot of movement there, and also an extension of the use cases. What I see now is that there is a lot that can be done between robotics and generative AI; we are probably just at the beginning. I've seen a lot of cool stuff that some suppliers are doing as well, but I still think it's only a small part, because we still need to cross-pollinate knowledge much more and collaborate in research across different fields. For me, in the coming months a lot will come from robotics and AI together.

The next thing I wanted to mention is mobile models. The tendency before was to have the biggest model possible, one model that can do everything, but we have seen that's not the case. It's probably better, also for sustainability, to have smaller models trained specifically on your data for a specific task. So the next step will probably be models so small that they can run on a mobile device, at the edge. You can fine-tune them every time you receive new information on your phone, so they stay updated with your information, and you can interact with this conversational AI that knows everything you have done.

So I think models will shrink to a size that fits on our mobile phones.

Rija: You're talking about technology, and I'm even worried about the social implications. I'm wondering whether I need to talk to my daughter about AI, because they'll probably have it in their middle-school curriculum, right? But anyway, how about you?

Sathish: I want to answer that question, but I want to begin by reinforcing two points that each of you made, which are super important for this audience to take away from this session.

First, the building-block, democratized value-chain platform that she talked about. My observation is that customers who are very mature in developing machine learning models have a platform like this that they invested in over the last couple of years and standardized on. It's what the entire IT team uses to build, train, and deploy models to production. Super important.

The second point you made: it won't only be large models that win. We see that all the time with purpose-built models that are much smaller. What do I mean by smaller? Ten billion parameters, fifteen billion parameters. They can sometimes give you much better price performance and better accuracy than a hundreds-of-billions-of-parameters model. So don't think one model is going to rule the world; it's not the one ring that rules them all. It's going to be a combination thereof, which was your point.

Now, as we were coming into the session, Rija said that as we close, we should talk a little about how we see the future; she asked me to stare into my crystal ball. So I'm going to share with you what I see, and maybe a year from now, when we all get together at re:Invent, you can tell me whether I was accurate or not. So here we go.

I definitely see eight things happening in the next 12-24 months that will have a meaningful impact on the machine learning landscape.

Number one, I believe we will see cross-industry adoption of machine learning models. If I look at the last five to seven years, certain industries have led: the auto industry, definitely, for vehicle development; healthcare and life sciences; financial services. But there have been laggard industries as well. I see generative AI democratizing that: more industries will start to catch up and deploy more models in production, unlike in the past. That's number one.

Number two, and you brought it up previously: multimodal models. A lot of the use cases you see today are text-to-text or text-to-image. With multimodal models, and single models able to handle multimodal use cases, we will definitely see a lot more adoption.

Third, you mentioned it, and it's there on the slide as sustainable edge embedding: edge computing. In an auto manufacturing facility it's super important. Think about safety and quality enhancement, think about predictive maintenance: super important use cases that I've seen in production in the auto industry. Fourth, I see more and more usage and accelerated integration of models into application development.

In the past it has mostly been about social media applications, but I see many IT applications, whether on the manufacturing floor or in our own productivity environments, adopting models so they can improve our day-to-day productivity.

Fifth, I see regulatory and governance oversight becoming more important. Look at the AI Act in Europe, or the executive order that President Biden recently introduced at the White House. All of these are evolving regulatory and compliance requirements we need to watch out for.

Sixth, I see a lot of silicon innovation. You may have noticed there was a lot of talk about NVIDIA, and a lot of talk about our own silicon innovation. The cost of compute is meaningful today, whether you're training your own model or running inference on one. A lot of innovation will continue to happen on the silicon and infrastructure side, because that's what's needed to really scale AI applications. Seventh, responsible AI: we talked about the importance of responsible AI and model explainability.

We talked about model predictability and model drift. All of these are important, and that's why we introduced Guardrails for Amazon Bedrock as a way to help customers deploy models in a responsible way.

Last but not least, you mentioned your daughter. I think the biggest roadblock today in our customer base is lack of talent. Look at what happened over the last 10 years: what degree were most young people entering the workforce pursuing? Computer science, because they felt their futures would be dictated by software jobs in the tech sector.

I think that is about to change. Don't get me wrong, software jobs are super important. But a lot of children now entering school, and thinking about college and the workforce, want a data science degree, an AI degree, machine learning knowledge.

So, eighth, I see a shift in the workforce: talent being built at school, at the grassroots level, that can speak machine learning and data science, and a lot of companies and organizations accelerating the build-out of that talent pool.

That's my crystal ball.

Rija: And I'll just add one at the end. What I feel is this: when the human brain and the machine brain start collaborating, augmenting each other in day-to-day life, and that becomes the new normal, we will have been successful.

OK, so the last question. As you get started on all this, what do you think the roadblocks are? What are the watch-outs we should be careful about?

Ferrari speaker: Yeah, for me, one watch-out is what you mentioned: taking care of the human in this process. As we said, machines and humans should cooperate and work together, and at Ferrari we care a lot about people and the experience of driving.

When you think about a machine learning solution, in the end you have to think about the person, the customer, who is using the solution. And now that there is a big push toward automating everything, the value of the human is very relevant.

We work in the luxury space, which is peculiar, very different from other spaces, because the customer needs human contact. For example, we work with dealers, and the dealers are the point of contact for our customers. We don't want to replace them with chatbots or with AI.

The idea is that technology, and generative AI too, can help dealers increase the knowledge they have about customers, improve the way they can propose the product, and enrich the experience of the configuration, and so on. But we will never forget the role of the human, which is key, because customers want the experience; even if you have a robot acting like a human and saying the same things, it's totally different.

In other markets, if you want to buy a sandwich and there's a screen, you don't want to interact much with people; if you're a young person, you're just interested in the customization, you get your sandwich, you pay digitally, and that's it. In luxury, where the experience is everything, the human is very, very relevant.

You want to speak with people; you want to be cared for by people. And I think it's very relevant to take this into consideration, also because, as we mentioned about governments and regulation, there will be a process, I don't know whether short or long, to define what is machine-generated or AI-generated and what is real. Everybody needs to know, if I receive a call, whether I'm speaking with a robot or with a real person.

I don't know how much time it will take, because we know regulation usually moves more slowly than technology, but we will reach the point where every piece of content carries a label saying whether it was generated or not. And that changes the customer experience a lot.

It's different if the technology is helping the sellers, the dealers, versus the customer directly using a chatbot or conversational AI. So for me, it's very relevant to understand the relationship between people and machines.

The other point is about architecture and governance. At the moment, for example, we are working a lot on RAG (retrieval-augmented generation) use cases: retrieving exact information from texts. What was very important is that we were able to create a safe environment. We are using the cloud, of course, but all the business users can put in the documents they want and interact with a chatbot, asking questions, doing summarization, whatever, while everything stays within the company's private cloud environment, following all the security and privacy rules.
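A minimal sketch of the retrieval step behind such a use case: rank the user's own document chunks by similarity to the question, then hand the best ones to a chat model as grounding context. The embed() helper here is a toy stand-in; in practice it would call an embeddings model endpoint running inside the private environment.

```python
# Minimal RAG sketch: retrieve the most relevant chunks from the user's
# own documents, then build a grounded prompt for a chat model.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy stand-in: hashed bag-of-words. Replace with a real embeddings
    # model endpoint hosted inside the company's private environment.
    vec = np.zeros(256)
    for token in text.lower().split():
        vec[hash(token) % 256] += 1.0
    return vec

def retrieve(question: str, chunks: list[str], top_k: int = 2) -> list[str]:
    q = embed(question)
    embs = [embed(c) for c in chunks]
    # Cosine similarity between the question and each chunk.
    sims = [float(q @ e / (np.linalg.norm(q) * np.linalg.norm(e) + 1e-9))
            for e in embs]
    best = np.argsort(sims)[::-1][:top_k]
    return [chunks[i] for i in best]

def build_prompt(question: str, chunks: list[str]) -> str:
    context = "\n---\n".join(retrieve(question, chunks))
    # The grounded prompt is then sent to the chat model of your choice.
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "The configurator lets owners choose paint, trim, and wheels.",
    "Warranty claims must be filed through an authorized dealer.",
    "The data science hub meets monthly to share projects.",
]
print(build_prompt("How do I file a warranty claim?", docs))
```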

The idea, and for me it's very important, is to centralize the architecture and the governance of these AI use cases. Otherwise you will end up dealing with security problems, because all the business owners within the company will try online services from startups without any control or governance.

So we are working really hard on governance for machine learning use cases, which I think is very strategic to moving at a very fast pace.

Rija: True, true. OK, how about you?

Sathish: Yeah, maybe I'll make it quick so we can make this interactive and get some questions from the audience. There are a couple of key considerations I've seen matter. One is having a clear strategy, clear goals, and single-threaded ownership within the organization. You mentioned three things, people, technology, and processes, and to me, also an organizational structure that gives you the ability to leverage your data science teams, analyst teams, and IT teams. Moreover, what you're trying to achieve through the adoption of AI needs to be very clear up front.

Second, I mentioned organizational structure. What I've found common in successful teams that have deployed AI models in production is a hub-and-spoke model: a centralized platform team responsible for a common set of tools, governance, and security practices, plus embedded teams within the lines of business (and you mentioned that you have a similar model at Ferrari) that can identify the right use cases and make sure they go to production.

So it's not just about experimentation. You mentioned the importance of data, which is critical: you don't know what to automate if you can't already get insights from your data. Having the right tools, and integrating those tools into your application development ecosystem, is super important. It's not just the model, the infrastructure, or the use case and business outcome you're trying to drive; it's how the entire IT stack is integrated and how you manage it across your entire application flow. And then talent.

Then knowledge and training: making sure people are aware of and abreast of the latest developments. It becomes a continuous, virtuous cycle. So that's what we see.

Rija: You covered most of them; I have just a few to add. You talked about objective metrics and qualitative and quantitative outcomes. What I have seen is: let's not restrict those to the stakeholders only, let's percolate them to every layer of the organization, because otherwise the portfolios don't know what we are driving toward. The objective metrics need to be set working backwards, and then cascaded across every level.

The second one I call "mind the gap". AI initiatives may be a big priority for the data and analytics team, but that might not be true for the rest of the portfolio. If you have an insight about to be integrated into an application, you need the integration team, the middleware team, the application team, and if their priorities aren't set and there is no enterprise alignment, there's going to be a problem.

And the last one, probably one of the most important in my view, is speed. Most of these AI initiatives are built for a competitive advantage in the market. But once the AI project starts, people get very focused on model accuracy, 80%, 85%, 90%, and it becomes like an R&D cycle.

What I have generally seen, and would recommend, is to have an authoritative decision maker who can say the model's accuracy is good enough at this point in time: deploy, and then improve the model iteratively. Otherwise the opportunity is lost, and the business leader doesn't see the point of spending on the AI initiative.

Sathish: You just reinforced an Amazon leadership principle called Invent and Simplify. Build the model, don't wait for 100% accuracy, deploy it, learn from it, and then make it better.

Rija: OK, that's why we are partners.

Now let's open it up to the audience. Any questions?
