Generative AI in medicine: Improving patient & provider experiences

Well, welcome, everyone, to our panel. We're going to go through some quick introductions, and I'll set up some context for what we're going to talk about today.

So today we have leaders from three top pharmaceutical companies, and we have a representative from AWS who's going to give us a cross-industry perspective. You're going to hear about the individual journeys these companies are on in the AI and ML space, as well as a view across the pharmaceutical industry.

So very quickly: I'm Matt Rich, I work for PwC, and I lead our pharma and life sciences practice in the technology space. We have three distinguished industry leaders here, along with our AWS friend Deepak, and they're going to introduce themselves.

Hi, my name is Murali Vam. I lead cloud, data analytics, and enterprise architecture at Gilead. Gilead is a research-based biopharma located in Foster City, California. We are about 18,000 employees, with a $70 billion market cap, and we are focused on virology, oncology, and inflammation.

And Murali, I'd like you and the rest of the panelists to comment on your perspective: gen AI, is it overhyped, underhyped, or real?

I think it is real and overhyped and hype is good.

Awesome. All right, good afternoon. I'm Tim Coleman, chief technology officer for Eli Lilly and Company. We're a medicines company based in Indianapolis, Indiana, and our focus is uniting caring with discovery to create medicines that make life better for patients around the world, with a particular focus in diabetes, weight management, oncology, immunology, and neuroscience.

In terms of your question, it's hard to talk about AI without using the word hype. But for me, AI is very real, and it's real because some of the outcomes we're seeing in the early implementations are very compelling. I'm super excited because we haven't even scratched the surface of what's possible with AI in health care and in the life sciences space. So I'm very bullish; I think the best is yet to come.

And my name is Ken. I'm vice president responsible for cloud hosting and productivity services at Bristol Myers Squibb. We are an oncology and immunology company, and our focus is getting drugs and therapies to our patients faster.

Regarding the comments my colleagues have made, I'm in the same space: it's real, and it's hyped. We've seen some amazing opportunities. I think one of the more compelling things it's done is give the business population a real-life example of what a cloud-based solution can be. We talk about cloud a lot, but we mostly use cloud as infrastructure-as-a-service for hosting; this has given people an actual solution they can tangibly look at, and it has opened up possibilities for other solutions. AI has become synonymous with LLMs, which isn't really fair, but it gives us the opportunity to leverage the new technology and the new awareness, and then find solutions that solve problems more effectively.

Thank you. I lead the life sciences business for AWS and have been with the company for eight years. I'll go to what Andy said a couple of months ago: a hype cycle will be followed by a substance cycle. What I'm seeing more and more is that the hype cycle is going to be followed by a budget cycle, and that's going to make it real.

Very good point.

So let's start with vision. I'd love to hear from each of you: what's your north star for gen AI? What are you trying to achieve? What are you going after? We understand the industry and the problems it's trying to solve, but I'd love to hear individual examples. And then, Deepak, if you could bring it home across the different companies you support.

I want to start.

OK. So before I talk about the north star for AI at Gilead, I want to give a little bit of context, because all of my answers will be based on it. About three years ago, we decided that AWS is our strategic cloud service provider, and we have only one enterprise data platform, on AWS. We have a single enterprise business catalog, and we have an AI platform that consists of Databricks as well as the AWS SageMaker ecosystem and Bedrock. With that context: we also adopted a data mesh approach to manage data at scale, and we have won industry awards for managing data at scale with that approach.

So the north star for AI at Gilead is basically the patient and patient outcomes. The vision for our company is to discover, develop, and deliver innovative medicines to people living with life-threatening diseases. With that in mind, our aspiration is to bring 10-plus transformative therapies to patients by 2030. So anything and everything we do with AI is to enable the entire pharma value chain, from drug discovery and development to commercialization, including back-office processes. And we have implemented a lot of use cases. We are working on a knowledge management use case in our research drug discovery space, where research data scientists literally go through thousands of documents for target assessment. That's just one simple use case; our approach is to adopt AI across the entire pharma value chain.

Thank you, Murali. And I think that's a great point: the patient is the reason. As you think about it, the diseases we're trying to solve now are the hardest, like oncology and diseases of the brain such as Alzheimer's, and those are going to require a more sophisticated approach.

Tim, I'd love to hear your thoughts.

Our goal at Lilly with AI is to reimagine how team Lilly will operate in an AI-powered world. Ultimately, that means we want to bring together the best of breakthrough technology with the best of our scientific and business expertise to help speed innovative medicines to our patients. At the end of the day, it's all about our patients, and just like Murali, we see this opportunity across the entire value chain.

So we're focused on: how do we accelerate drug discovery? How do we speed up our clinical trials, particularly patient enrollment, which is kind of the long pole in the tent? How can we improve the experiences of patients and of the health care professionals working with us? And ultimately, how do we optimize business processes for speed and also for efficiency?

Thank you, Tim. Appreciate it. Ken, give us the Bristol Myers perspective, and kind of do the same thing Murali did and set up some context.

Right. So Bristol Myers is facing a difficult few years ahead. We have two of the largest loss-of-exclusivity drugs; it's the cliff we're about to go off of. On the positive side, we just released nine drugs, we've got another 12 coming, and we have an exceptionally strong pipeline. So it's about how we navigate that cliff, go through the valley, and come out of it stronger. And we see AI as an enabler across all of that, to unleash the information and do the same things that I think all of pharma is trying to do.

Yes, we want to drive more innovation. We want to get more drugs into our pipeline, because we can't just focus on the ones we have; we have to make sure that pipeline stays strong, and our CEO is committed to that by focusing on research and on commercial. Our launches have to go well, our new drugs have to land in the market effectively, and we have to keep filling that pipeline. We see AI helping us in all of those areas: large language models and AI in research will allow us to identify therapies we might not have seen before.

And if we do see them, to move on them faster. On the commercial side, we want to understand more about how our patients are dealing with and reacting to our drugs, and make them more effective as we go to market. In manufacturing: can we be more efficient in the way we make our drugs and reduce the number of failed batches, especially in a therapy like CAR-T, where it's individualized medicine?

And then finally, on the clinical side, it's the same type of thing: how do we increase the populations that have been underserved in our clinical trials, to make our drugs more effective and understand their impact broadly, and get through clinical trials and to market faster?

Thank you, Ken. And Deepak, maybe you could comment on what you're seeing across all of your clients.

I think every fathomable use case has been discussed in the last few minutes; it's hard to go beyond that. But beyond all the use cases and the productivity, at the end of the day every company and every patient wants new therapies to market. It takes 10 years and $2 billion. Hopefully AI makes a dent in all of that, from discovery all the way through commercialization. But I do think that, irrespective of where AI ends up, it's the best reason, the best excuse, that you've had in a couple of decades to go get the money you need to sort out the underlying data.

So to me, the true north star would be to get data into a place where even basic analytics can get done better; there are a lot of data science and data challenges. The true north star would be to get data flowing within these large organizations as fast as possible. Data is the lifeblood of the AI model, right? So that's the first challenge to solve.

We've heard some good examples of how the different companies are trying to solve that with mesh approaches and so forth.

So Deepak, let's stay with you. From all of your customer and client interactions, where do you see the one place with the greatest impact right now?

I think everybody has kind of started off with what I'd call enterprise-wide productivity use cases, and that was the new thing for a few months. But I think the real game changer is going to be once we get new models, or existing models tuned, to address the area of biochemistry. General models only get you so far. Once you get to that level of specificity, where you have a specific model that addresses a very niche use case, be it in molecular design or in helping find better trial participants, that would be a game changer. And the question is how quickly we can get there, and what investments are needed, both from the industry and from companies like us, to get to that point.

So: starting with enterprise productivity, and then moving down into more scientific and medical functions later. I think there is enough interest from both the model providers and industry in general to get to that point. Murali, I would love to hear from you: the one area where you're seeing the greatest impact, not in the future, right now. Where are you seeing the biggest impact?

Well, I have to say that we are still experimenting with generative AI; all business functions in Gilead are working on generative AI use cases. But I do see it wherever there's a lot of unstructured data that has to be processed or synthesized, right?

So, as I said earlier during my introduction, it's the knowledge management use cases in the drug discovery space. Researchers actually go through thousands of documents to uncover hidden relationships between a disease, a target, and molecules, and generative AI can help there. And again, to what Deepak said, the outcome is ultimately employee productivity, along with operational efficiencies and scaling innovation. Generative AI is applicable across the pharma value chain, but the one thing we're focused on is developing these knowledge management use cases.

So, knowledge management.

Yeah, that's a great use case, and I know others on the panel have the same use case in their companies.
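To make the target-assessment idea concrete, here is a minimal sketch of what such a literature-mining step could look like. It assumes the Bedrock text-completion API with Claude v2; the prompt, model choice, and output schema are illustrative assumptions, not Gilead's actual implementation.

```python
import json
import boto3

# Hedged sketch: use an LLM to pull disease/target/molecule relationships out
# of one abstract at a time, so thousands of documents can be screened
# automatically instead of read by hand. Model and prompt are assumptions.
bedrock = boto3.client("bedrock-runtime")

def extract_relationships(abstract: str) -> list:
    prompt = (
        "\n\nHuman: From the abstract below, list every disease-target-"
        "molecule relationship you find, as a JSON array of objects with "
        'keys "disease", "target", and "molecule". Return [] if there are '
        "none, and output only JSON.\n\nAbstract:\n" + abstract + "\n\nAssistant:"
    )
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-v2",  # assumed model choice
        body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 500}),
    )
    completion = json.loads(resp["body"].read())["completion"]
    # Assumes the model returned bare JSON; production code would validate.
    return json.loads(completion)
```

Run over a corpus, the extracted triples could then be aggregated into the kind of disease-target-molecule map that researchers currently assemble by hand.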

Let's switch gears, though, from what's real and what's having impact now to how you measure that. Right? Because it's one thing to say it's impactful; it's another to actually realize the value.

So Tim, I'd love to hear, quantitatively or qualitatively, how you've been doing that.

Sure. I mean, we use the same traditional efficiency metrics as probably most organizations do. But a couple of years ago we pioneered a concept in pharma that we call DWEs. DWE stands for digital worker equivalent, and one DWE equals 2,000 hours of human effort, basically a full year's worth. It has actually helped us better quantify for our senior leaders what our savings actually look like; people very much embrace FTEs, and now also DWEs. Where are we going with this?

We're beginning to think about our total workforce as the combination of human workers and digital workers, with an emphasis on increasing digital workers over time. This is a strategy of augmentation: not doing more with less, but actually doing more with what we currently have. The concept is gaining a lot of positive traction with our senior leaders, and I think it's here to stay.

Maybe just one additional comment on how you're measuring success when it comes to the patient, because what you talked about was internal, and it's fantastic. Is there anything you're looking at on the patient side?

I think, for one, it would be speed in terms of getting our medicines to patients. And secondly, it's about patients having a positive experience: a positive experience on our medications, achieving the right outcomes, but also a positive experience in their relationship and engagement with the company when they need it.
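Stepping back to the DWE metric for a moment, the conversion arithmetic is simple. A tiny sketch, with hypothetical workload numbers (only the 2,000-hour figure comes from the panel):

```python
# One DWE (digital worker equivalent) = 2,000 hours of human effort,
# i.e. one full work-year, per the definition Tim gives above.
HOURS_PER_DWE = 2_000

def dwes(hours_saved: float) -> float:
    """Express automation savings as digital worker equivalents."""
    return hours_saved / HOURS_PER_DWE

# Hypothetical example: an authoring assistant saves 30 minutes per document
# across 50,000 documents a year: 25,000 hours, or 12.5 work-years.
print(f"{dwes(0.5 * 50_000):.1f} DWEs")  # prints "12.5 DWEs"
```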

Awesome. Ken, we'd love to hear from you on how you're measuring the value.

And Tim, I have to say I was very impressed with the digital worker concept; that's an interesting way of framing it. What we've done is a little more rudimentary, in that there are lots of trapped dollars all over our organization related to third parties that are helping us do things.

A lot of it has to do with where we see most of the opportunity in AI at the moment: summarization and consolidation. We haven't gotten to creation yet, but in summarization we have opportunities to take on manual efforts, things like how you create a clinical trial protocol or a computer system validation document. A lot of times there are a lot of bodies helping us do that: reading, synthesizing, and creating. If we can use AI for summarization to help with that, we can take out cost we no longer need and focus that money elsewhere. That's how we're starting to measure and see the efficiencies.

The other thing we're seeing: can we actually utilize information that's trapped, that somebody used to spend 24 hours going to find in order to respond to a question? Can we accelerate that by feeding it into a large language model and getting 60% of it back out, so they've got a starting point? And it's a BMS starting point, not a generic starting point.

Yeah, I really like that point. Pharma is a super document-heavy industry because of regulation, with a lot of documentation to produce, and the power of gen AI can really help there. That's a great point.

Maybe let's switch to learnings, then. Deepak, I'd love to hear the most common one or two learnings you hear from your customers as they go down the gen AI journey.

If I go back three months, or maybe six months: there was no shortage of ideas. Even today, every customer we speak to has 100 or 200 ideas they all believe are perfect gen AI ideas. Then we start double-clicking on those, and it starts narrowing down. By the time you get from ideas to use cases there are fewer, and then even fewer POCs. And when you get to the ROI point and try to say, OK, should I move this to production? Now you're looking at a few, like two or three, that are worthwhile, because of the ROI question. So I think the biggest learning has been that there isn't an idea that doesn't look like a great gen AI idea. But do you really have the ROI, and the data, to get to a point of efficacy and accuracy and all that? That's been the big learning: you don't pursue every one of them. I think customers are getting better at that.

The other thing has been the one-model question. There's a very popular model out there that everybody has pursued. Now we're getting to a point, because of cost, access, and otherwise, where customers are really thinking: we've tried that one model; should we be looking at a variety of models for the use cases we have? That's been a maturity thing. But both because of the availability of models and customers' understanding of what those models can do, there's a lot more of that coming: customers aligning models to their data really well.

The first learning is that you have to have a data foundation. We have a great approach to managing structured data; I talked about data mesh in my introduction, and we do a great job managing structured data at scale. But we haven't done a good job managing unstructured data, and this knowledge management work for the research division will help us there. So that's the first learning: have a great data foundation, and make sure data is discoverable and usable as well.

The second learning is that while everybody is focused on the shiny implementation of generative AI use cases, it's very important to figure out the MLOps, the FM ops, because we have to know how to incorporate new ontologies and taxonomies, or how to add the 1,001st document. Do we need a technical person to do it, or can a non-technical person do it? We have to figure that out. And I should say we also have a conversational AI platform at Gilead that we are using. Conversational AI has existed for many years; that's the chatbots that use natural language processing, and it overlaps with generative AI. It's working well for us for chatbots and provides the foundational and operational capabilities to manage them. But how we intersect generative AI with conversational AI, and manage the MLOps, is something we are still figuring out. Again, it's a lesson learned over the past eight months: every business unit is implementing generative AI use cases, but they are focused on the use case itself. How do we take it into production? How do we manage the entire life cycle, with a focus on cost as well? That's another huge lesson learned: how do you combine it with other techniques, and how do you scale to production?

Scale is a great point.

I know that if we were to poll the whole industry, scale would be one of the top challenges. Tim?

Sure, if I could jump in and maybe come at it from a little bit of a different perspective. I think hallucination is a common objection I hear as to why not to use generative AI, and clearly it's an important risk we need to mitigate, especially given the industry we're in. But by the way, people also hallucinate, and just like AI makes errors, people make errors. The learning is not hallucination, with people or with machines. The learning is that the quality practices we all appreciate, and I want to emphasize appreciate, are going to continue to be important for the foreseeable future, in order to achieve the high quality standards we must meet. And at the end of the day, I believe that even though we will still have to apply those quality practices, we're going to see significant benefit in terms of cost and speed compared to before. So: quality.

Yeah, I love that point: still going to need the quality. And I think the regulators are going to come along with that story. Ken?

A few learnings. Take us back to October, or actually right around this time last year: we were at re:Invent, all the great news was coming out, and then between then and the end of the year OpenAI kind of blew everything out of the water and changed our priorities for the next year. We were pursuing a single-cloud, differentiating model: we were heavily AWS, and the other clouds basically had to show that they differentiated enough for us to consider moving workloads. And we were heavily compute-bound. I hate using the term "models" because now everybody goes to LLMs, but all of the compute was generally focused on slack capacity, x86: you could get it, and you could get rid of it when you didn't need it. This blew everything out of the water, because now you had a premium resource, GPUs, that was very difficult to get access to, and you had to make sure you used it effectively. You've got this brand-new situation where Google's offering something, Azure's offering something, Amazon's offering something, so multi-cloud has actually become something we've needed to consider and learn to manage, where up until last year it wasn't a consideration at all. And I think multi-model is going to be something we have to live with as well: the ability to move in and out of models as some get more effective, or as we realize some hallucinate less or more. What's our standard? How do we measure against it? It's a much more dynamic landscape than it was a year ago, and I anticipate it will stay that way for a period of time before it shakes out. So availability of resources is one of our learnings, the scarcity of it, and the fact that there's only one player in the space providing what everybody needs. And the other is the complexity of which models you're going to choose, and how you train them or not.

Yeah, that's a great point, Ken. So in about 15 minutes we're going to switch to audience questions, and in a second we'll double-click one more level down and dive into some examples. But before that, one topic I'm hearing more and more, I don't want to say complaints but concerns, is cost. GPU usage is going to keep increasing, and it's not the most cost-effective resource; it's more cost-effective on the cloud than elsewhere, but still. Saying that you're already managing it might be too much, but how are you starting to manage those costs for AI and gen AI as they come along?

I would say you have to start with a cost-aware architecture. A lot of people are not familiar with serverless compute, as an example. My team provides technology governance for the platform for all of Gilead, and as part of that governance we have published architectural patterns that promote serverless compute to save cost. Also, when we start the design of any AI system, involving the right people as part of the design process, including FinOps resources, is very important. As part of our strategy with AWS accounts, we have a good tagging strategy, so we can charge back to the business units. And of course, education and awareness go a long way, so we run cloud fluency programs to educate people on how to reduce costs. The last point I would make is that we have a strong MLOps practice. MLOps is people, process, and technology coming together to manage the entire machine learning life cycle, and we are very mature in that; as part of it, we have been able to optimize code written by data scientists to run in one third of the time on cloud, much to the chagrin of AWS. So a strong focus on MLOps is important: with it, we have to figure out how to run generative AI applications faster and more efficiently.

And maybe, Tim, we can just hear from you on that.

Yeah. To start with, the reason we're investing in AI, of course, is to drive transformation, and as part of that transformation we want to achieve speed and reduce costs. So the first principle is that the return on investment is positive. With that said, we are seeing significant increases in our usage of cloud, and as a result our costs are going up, particularly in the drug discovery area. I would agree with Murali: we're looking at our solution architecture as one key area. The other, which maybe hasn't been mentioned, is cloud FinOps. We're partnering with our cloud providers, including AWS, to figure out how to best maximize our use of cloud while also managing the cost down. Those would be the two key areas for us.
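As an aside on the tagging-and-chargeback point Murali raised: a minimal sketch of what a tag-based cost query could look like with the AWS Cost Explorer API. The tag key and date range are hypothetical, not either company's actual schema.

```python
import boto3

# Group one month's spend by a cost-allocation tag so AI workloads can be
# charged back to business units. "business-unit" is a hypothetical tag key.
ce = boto3.client("ce")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-11-01", "End": "2023-12-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "business-unit"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    tag = group["Keys"][0]  # e.g. "business-unit$oncology"
    cost = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{tag}: ${float(cost):,.2f}")
```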

All right, let's get down into some real examples. We talked about where we're seeing impact and in which functions, maybe enterprise tech; now let's go one more level down. Deepak, we'll start with you: across all the different things you're seeing, what's your favorite example of a use case that is going to produce value?

Thanks. I'd have to say it's probably knowledge management around submissions to the FDA. I think there's a lot of work there: understanding previous submissions, and being able to automate a lot of those processes, which today can be very human-intensive and prone to errors. I find that to be one that can be controlled; of course, the regulators have to agree to it. But beyond all the summarization happening in the regular enterprise-wide use cases, that's the one I feel will give a big lift. If we can get it done right the first few times and build enough confidence with the regulators, that's the one that's going to give a big lift.

Yeah, the submissions process is a massive manual lift, and sometimes the long pole in the tent, depending on what therapeutic area you're looking at. So that's a great example. Ken, can I ask you: what is one of your favorite use cases, and why?

I think there are two. The first follows on to what Deepak said. We put in place a program to look at all of the documents associated with a clinical trial, and one of the first pilots is this: we feed in the clinical trial protocol document, which is 65 pages and exceptionally technical, and out of that you have to generate a patient form, the form patients sign to say, "I understand what I'm getting myself into, and I sign on for this clinical trial." Somebody has to read that whole document and create something that people at varying levels of educational experience can sign: somebody with a college degree, somebody with a high school degree, somebody with another level of education. They all need to be able to understand what they're getting themselves into. And we found meaningful results in consuming that document and generating a useful document tuned to a given educational background. That's one of our first use cases that we've been able to prove out. And this goes to the concept of models: we couldn't put that into a generic model, because it doesn't have the scientific context. But we were able to find a model that did have the scientific context, and that got us farther along.

And the other would be computer system validation, where we're looking at the same type of thing. Today it's manual: you have a validated system, you get system changes because you have to do an upgrade, and you have to figure out which ones have to be tested. Somebody has to read through it all and translate it. We were able to load one of those requirements documents into an LLM, and literally in half an hour it gave all of the information that somebody who read the document came out with. Those are the time-saving and cost-saving opportunities that make this transformational for us.
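A minimal sketch of how the protocol-to-patient-form step Ken describes might be wired up. BMS did not name its model or prompt, so both are assumptions here, and the output would be a draft for medical writers to review, not a final consent form.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

def consent_draft(protocol_text: str, reading_level: str) -> str:
    """Draft a plain-language trial summary at a target reading level."""
    prompt = (
        "\n\nHuman: Using only the clinical trial protocol below, draft a "
        f"plain-language summary for a reader with a {reading_level} "
        "education. Cover the purpose, procedures, and risks, and do not "
        "add anything that is not in the protocol.\n\nProtocol:\n"
        + protocol_text + "\n\nAssistant:"
    )
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-v2",  # assumed; BMS used a science-aware model
        body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 2000}),
    )
    return json.loads(resp["body"].read())["completion"]

# One call per audience, e.g. consent_draft(text, "high school") and
# consent_draft(text, "college").
```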

That's a great patient example, though, the first one: generating a plain-language summary that somebody can understand, versus 65 pages of medical and scientific terms. That's value. Tim?

Yeah, we've already talked about drug discovery, so I won't hit that one again.

But, you know, I think my favorite use case is actually scientific authoring, so you're going to hear a bit of a trend here.

I think we all understand that we're a very content-rich and document-rich industry. I know we are at Lilly, and it's very labor-intensive and time-intensive to put together some of these very complex scientific documents.

We've been doing some experimentation, particularly with our clinical and regulatory documents. The specific use case we're working on is patient narratives, or trial narratives: the summaries of patient experiences in our trials. We've been able to demonstrate with natural language generation that what would on average take about four hours, creating one patient narrative, now takes less than a minute. That's super compelling when you think about the number of patients involved in our clinical trials around the world.

So we're very bullish on scientific authoring. And it's not so much about reducing costs; it's actually about the time savings. Every day and every week we save in the submission process ultimately results in getting our medicines to patients faster. So we think that one's within our purview, and we're looking at multiple use cases across the enterprise for authoring.

And I think that's a great example of what happens when you shorten the time. Depending on the company you work for, the pipelines are different, but one of your promising therapeutics is in the Alzheimer's space, slowing down the progression. If you could get that to market six months faster, how many people could it help slow the progression of Alzheimer's for? That's significant value, so that time savings means a lot.

Well, I know they gave great use cases, but I'll focus on a very simple one. In Gilead, people open tickets to find out what the jury duty policy is.

Jury duty?

Yes. Answers to simple questions are buried in a document somewhere. So, just like any other enterprise, we are trying to create a Gilead GPT where employees can ask questions. That really translates to a lot of employee productivity. Happy people.

Absolutely. Yeah, I would actually like to know what that policy is for our company. Thank you for that idea.

So let's switch. We've heard a lot of great examples; now let's turn to things you're concerned about, and I'm not talking just technology items. It could be talent, governance, risk. Maybe give a couple of examples of the biggest worries, the biggest risks you're trying to manage. I'll go to Ken for that.

So I think there's a fear. This is a net-new technology capability, and remember, we've all lived through GDPR and Schrems II and all the variations on regulation that come out. It was recently published, I think, that the US government has come out with its perspective on AI already, which is fairly high-level and not too concerning yet; the EU has reserved the right to come back with what they think. The unknown is what we worry about. And I think the other concern is controlling this as it starts to roll out, and the bias associated with it. We all know the models are biased based on who created them, and understanding that bias is something you find out over time; it's not like they publish it on the front page. So understanding what we're getting ourselves into with these models, and the unintended consequences of their use over time, is some of what we think about at BMS, along with creating a responsible AI policy internally on how these are used. We've all seen the poor, who was it, the Samsung engineer, who loaded stuff up into ChatGPT and went "oops." There's an expectation that people will follow the rules, but people still email things they shouldn't, and this now makes it out to everybody if it's done the wrong way. So those are the things we're concerned about.

And that's an excellent one.

Deepak, I'm really curious, because you probably hear a lot of concerns from your customers. Maybe the top one or two?

Yeah, I think the biggest one I've heard, and this is probably close to Ken's point about uncharted territory, has to do with an industry that's so fixated on IP, IP protection, and data protection. I think the risk is about IP attribution. When you use a model to generate IP of some sort, is there a risk that ten years from now somebody comes along claiming that that drug was based on their research, and they need not only attribution but also payment? I don't know that the industry has thought through how to solve for that yet. This goes back to: what was the model trained on? What data did it have access to, and is that public-domain data? Now you create an output from that model; is that your IP, or is it shared with some researcher who published a paper? The lawyers are just catching up to that risk. For the market to have confidence that the IP being created is protected from those lawsuits in the future, that risk needs to be addressed before there's larger proliferation within research organizations. You can get away with it for Q&A and GPT-style assistants, but when you get into that realm, it starts to get a little more serious.

Yeah, I think we're going to see a lot of interesting legal battles over IP in the next decade. That'll be fun to watch.

Great point. Tim, I'm going to lead you into one thing I know we've talked about, which is the challenge around talent in this space. Maybe you could talk about that, because I think it's an important point.

Sure. I'll talk a little bit about that one, and also another. I think the challenge from a talent perspective is about our existing employees, and upskilling those employees in modern technologies, particularly AI. That's not just our tech talent; it's also upskilling our business talent to actually do work in AI. We're talking about how we train our business partners, our employees, on how to write effective prompts. Prompts are the language of AI, and if our employees don't know how to write effective prompts, then more than likely they won't have a good experience. So hiring good talent, training good talent, and upskilling good talent is a key area. Another one that's somewhat related: I'm concerned about change management.

Yeah. How do we inspire our employees to use AI and to maximize our investment in AI?

So, adoption of what you're doing?

Absolutely. And I find it interesting: everyone has an opinion about AI, but the majority of people have never used AI or public AI tools, or they've not used them to help them be more effective in their work. So I think all of us have a challenge around change management: one, how do we get people to use it, and two, how do we get them to use it most effectively?

That's a really interesting point, because I think if we surveyed folks, they would say they've used AI. It's kind of like saying, "Hey, I don't like Brussels sprouts" before you've tried them, right? I wonder how many actually have. That's a great point.

Murali, let's round it out.

Well, in terms of concerns, I share what Deepak said: IP. If you noticed yesterday, Adam talked about indemnification only when he talked about Titan. Anthropic's CEO was on stage, and after he left the stage they didn't talk about indemnification; but for the Titan models, they are offering indemnification. So that's something we are working on with AWS, and it's a top concern. From a skills perspective, I think Tim said it really well. It's the same with us: upskilling existing talent, prompt engineers, MLOps, which requires prompt engineers to work with the different personas. That's something we are also concerned about and working on.

Awesome. I want to leave about 15 minutes for questions, so this will be the last one before we go to Q&A.

How do you approach your build-versus-buy decisions? Do you build it? Do you buy it from a vendor? Do you partner? Do you use a combination? Ken, I'd like to hear how you're approaching the implementations.

So right now, I would say we haven't built our own. We generally use openly available models, whether that's ChatGPT or something from Google; we're still playing with the models and trying to understand them. And I will caveat that, because this is the danger when we talk about AI: it's not just about LLMs. If I talk to my scientific community, they've been doing models for years; we have a huge SageMaker implementation that uses a lot of models. What I'd say is that from an LLM perspective we have either bought or used open, free models, and there are some really cool free ones out there you can consume; Hugging Face is a great source. Or we've used the publicly available models. Specifically for LLMs, at this time I'm not sure we're ready to start writing our own.

There's enough really great work out there that can be refined and tuned before we find a use case that demands it. In the research community, I'd say they probably have needs of a more specialized nature that could justify doing their own creation.
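As a concrete instance of the "free models you can consume" point: a minimal sketch using an off-the-shelf summarization model from the Hugging Face hub. The model choice is illustrative, not necessarily what BMS runs.

```python
from transformers import pipeline

# Off-the-shelf summarizer pulled from the Hugging Face hub; no training or
# fine-tuning is required to start experimenting.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# Stand-in text for a long internal document, e.g. a requirements or
# change-control document.
document = (
    "The validated system receives quarterly vendor upgrades. Each change "
    "must be assessed to determine which test scripts need re-execution. "
) * 3

summary = summarizer(document, max_length=60, min_length=10)[0]["summary_text"]
print(summary)
```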

Tim?

We're doing all of the above. From a partnering perspective, we're partnering for capability and also for capacity. We're working with large organizations, and we're also working with startups, where we're taking a bit of a different approach: not just a client-vendor relationship, but investments in equity, so that the intellectual property and development help we're providing makes them effective while also helping us achieve benefit as well.

I heard a speaker at a different conference who kind of changed my thinking about buy versus build. I think the paradigm is build more or build less, because if you're going to develop a solution that's really meaningful in solving a problem, particularly in our industry, you're going to have to build some of it. So that's the question we ask ourselves for a given use case: do we need to build less or build more? And that helps guide how we approach it.

Yeah, that's a good point, because sometimes there aren't a lot of buying options.

We also do all three, buy, build, and partner, so I'll explain that.

There are three major patterns with generative AI. The first pattern is consume: ChatGPT is an example, where the foundation model, the fine-tuning, the data retrieval, and the application, the entire stack, is provided by a provider, a SaaS provider.

The second pattern is extend. With, say, Anthropic's Claude model, the foundation model and the fine-tuning are provided by Anthropic, but then we use a RAG approach to ingest our own data and build our own application on top. That's the extend pattern.

And then there's the build pattern, where the entire stack is built custom. Gilead is mostly anchored on the second pattern, extend; we use Anthropic's Claude model more extensively than OpenAI's or other models, but we use partners to add capacity and build on top of it. So we do all three.

That's pretty sophisticated; you've actually established these patterns, and you apply them to given situations.
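To ground the extend pattern Murali describes, here is a minimal RAG sketch under assumed choices: Titan embeddings for retrieval and Claude for generation, both via Bedrock, with a plain in-memory array standing in for a real vector database.

```python
import json
import boto3
import numpy as np

bedrock = boto3.client("bedrock-runtime")

def embed(text: str) -> np.ndarray:
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
    )
    return np.array(json.loads(resp["body"].read())["embedding"])

# Proprietary document chunks, ingested at query time: the "extend" part.
docs = ["chunk one of an internal document ...", "chunk two ..."]
doc_vecs = np.stack([embed(d) for d in docs])

def answer(question: str) -> str:
    q = embed(question)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    context = docs[int(sims.argmax())]  # top-1 retrieval, for brevity
    prompt = (
        "\n\nHuman: Answer the question using only this context:\n"
        + context + "\n\nQuestion: " + question + "\n\nAssistant:"
    )
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-v2",
        body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 500}),
    )
    return json.loads(resp["body"].read())["completion"]
```

The foundation model itself is untouched; only the retrieval layer and the application around it are custom, which is what distinguishes extend from build.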

Deepak, I know you're happy with any of those as long as they run on EC2, but maybe you can give us your perspective.

I think building a model from scratch, if you want to call it that, is a very expensive affair. We know that, and so I would be very surprised if, in the medium to long term, any of our customers actually did it. You'd need hundreds of millions of dollars to spare, which is probably not that easy. And it's not just the build; there's the upkeep of the model. You're not going to just build it once. You need access to the data to train the model, which is not easy to come by, especially in areas like imaging, and then there's the amount of compute capacity, to the point we made earlier.

And then you've got to keep improving the model, because it's never done. Do we have the talent in-house? Can we buy that talent? So it's not just a money question; is it the right thing to do? I think in the short term there's a lot of academic interest, a lot of science projects that get spun up because somebody wants to show they're good at building models. But in the medium to long term, it's probably going to be the model providers who do this for a living, and tweaking those models is probably where most customers end up.

Yeah, that's a great point. It's a tough endeavor to do that yourself at this point.
