Enhance your document workflows with generative AI

Document processing has traditionally been a time-consuming, expensive, and extremely error-prone process. Today, I'm here to talk to you about how you can use AI, and specifically generative AI, to enhance your document processing workflows.

Hi, my name is Navneet and I'm a product manager on the Amazon Textract team. Today, I'm also going to be joined by Ruchi, the director of machine learning at Centene, and later she's going to talk to you about how Centene has actually used AI to revolutionize their document processing workflows. Let's get started.

So why is document processing so tedious? To understand this, we need to understand how customers have traditionally been doing document processing. And we found three themes or ways in which customers deal with document processing:

  1. Large swaths of customers are still dependent on manual processes, which are highly labor intensive and, because of the human element, tend to be error prone, expensive, and tedious.

  2. Customers already use OCR and other legacy technologies, which have been around for decades, to extract raw text from documents. But this technology has a key limitation: it only extracts the raw text from a document, stripping away all elements of structure, which means you can't easily derive insights from the text or the document and use it for downstream decision making.

  3. Thirdly, certain customers have combined OCR and similar technologies with custom, rules-based post-processing code to derive insights from the document. But the rules-based nature of this code makes it non-scalable: documents come in a variety of structures and layouts, and it is almost impossible to write rules-based post-processing code that covers all these variations.

What does this mean? It means that processing even a simple loan application or a claims package can take several hours to days, and no one really has the time for that. It also translates to millions of dollars of operational costs spent purely on document processing.

Ok, so where were we? Like we said, document processing is tedious and it can cost millions of dollars. This is not a problem specific to any one industry: from financial services to healthcare and life sciences to the public sector, documents are the lifeblood of each of these industries and core to their business processes. And what we have heard is that customers across industries face this common set of challenges.

Enter Amazon Textract. Amazon Textract is a machine learning service that helps you extract printed text, handwriting, and structured data from virtually any document. How does this address the customer challenges we just spoke about?

Firstly, Textract goes beyond simple OCR and actually extracts structured information from documents, like tabular data or key-value pairs, so that you can utilize it for direct insights. It does this using a combination of computer vision, NLP, and other ML techniques, but it's all under the hood: you as a customer do not need any expertise in or experience with these techniques. You simply integrate with one of the multiple globally available, fully managed APIs that Textract offers, and what that translates to is lower costs and reduced manual effort.
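To make that concrete, here's a minimal sketch of what calling Textract and walking its response can look like. The bucket and document names are placeholders, and the sample block list at the bottom is a hand-built miniature of a real response, but the AnalyzeDocument call, the FORMS/TABLES feature types, and the KEY_VALUE_SET block structure follow the Textract API:

```python
def analyze(bucket, name):
    # Real Textract API call; bucket/name are placeholders. boto3 is
    # imported here so the parsing below runs without AWS credentials.
    import boto3
    textract = boto3.client("textract")
    return textract.analyze_document(
        Document={"S3Object": {"Bucket": bucket, "Name": name}},
        FeatureTypes=["FORMS", "TABLES"],
    )

def parse_forms(blocks):
    """Resolve KEY_VALUE_SET blocks into a {key_text: value_text} dict."""
    by_id = {b["Id"]: b for b in blocks}

    def text_of(block):
        # Concatenate the WORD children of a key or value block.
        words = []
        for rel in block.get("Relationships", []):
            if rel["Type"] == "CHILD":
                words += [by_id[i].get("Text", "") for i in rel["Ids"]]
        return " ".join(words)

    pairs = {}
    for b in blocks:
        if b["BlockType"] == "KEY_VALUE_SET" and "KEY" in b.get("EntityTypes", []):
            for rel in b.get("Relationships", []):
                if rel["Type"] == "VALUE":
                    for value_id in rel["Ids"]:
                        pairs[text_of(b)] = text_of(by_id[value_id])
    return pairs

# Hand-built miniature of a Textract Block list for one form field.
sample_blocks = [
    {"Id": "k1", "BlockType": "KEY_VALUE_SET", "EntityTypes": ["KEY"],
     "Relationships": [{"Type": "VALUE", "Ids": ["v1"]},
                       {"Type": "CHILD", "Ids": ["w1"]}]},
    {"Id": "v1", "BlockType": "KEY_VALUE_SET", "EntityTypes": ["VALUE"],
     "Relationships": [{"Type": "CHILD", "Ids": ["w2"]}]},
    {"Id": "w1", "BlockType": "WORD", "Text": "Name:"},
    {"Id": "w2", "BlockType": "WORD", "Text": "Jane"},
]

print(parse_forms(sample_blocks))  # {'Name:': 'Jane'}
```

The point is that the structure arrives as data, not as free text, so downstream code can consume it directly.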

With that background, I'd like to talk about a few of the core capabilities that are offered by Amazon Textract as well as some of the other AI services that AWS offers.

This slide has a bunch of features and capabilities, but I'd like to focus on three or four of them. Like I said, Textract is all about going beyond OCR, and there are specific features that help you extract structural information.

The first one is our Tables feature. Most documents contain information in a tabular format; think of financial documents or any lab reports you've seen. There's a lot of tabular information, and recognizing that the structure is a table and extracting it is what our Tables feature does.

Forms is a feature that automatically recognizes information structured as key-value pairs. If you look at your ID document, or literally almost any form, there's typically a key, like a customer name, and then the value, which is what you actually fill in. Recognizing that this is a key-value pair and extracting it is what the Forms feature does.

Just last month, we launched a new feature called Layout Detection. Layout detection extracts structural elements like headers, titles, and photos, which help you analyze the document in more detail and utilize that information for downstream processing.

The other feature I'd like to focus on is our Queries feature, which was launched last year. This is actually a customer favorite, because Queries gives you the ability to ask natural language questions and extract exactly the information you want in the format that you need. You supply the context in terms of the information you need, and can almost chat with a document by asking specific questions.

We've gone a bit further with Queries this year. As of last month, we launched a new feature called Custom Queries, which helps you as the customer customize the output of the Queries feature using as few as five samples of your document, in a completely self-service manner.
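As a rough sketch of how Queries is used (the bucket and document names are placeholders, and the sample block list is a hand-built miniature of a real response), the request shape and the pairing of QUERY blocks with their QUERY_RESULT answers look roughly like this:

```python
def build_queries_request(bucket, name):
    # Shape of an AnalyzeDocument request using the QUERIES feature.
    # The Alias controls the key name you get back in your output.
    return {
        "Document": {"S3Object": {"Bucket": bucket, "Name": name}},
        "FeatureTypes": ["QUERIES"],
        "QueriesConfig": {"Queries": [
            {"Text": "What is the gross pay?", "Alias": "gross_pay"},
        ]},
    }

def query_answers(blocks):
    """Pair each QUERY block's alias with the text of its QUERY_RESULT."""
    by_id = {b["Id"]: b for b in blocks}
    answers = {}
    for b in blocks:
        if b["BlockType"] == "QUERY":
            for rel in b.get("Relationships", []):
                if rel["Type"] == "ANSWER":
                    for answer_id in rel["Ids"]:
                        answers[b["Query"]["Alias"]] = by_id[answer_id]["Text"]
    return answers

# Hand-built miniature of a Queries response.
sample = [
    {"Id": "q1", "BlockType": "QUERY",
     "Query": {"Text": "What is the gross pay?", "Alias": "gross_pay"},
     "Relationships": [{"Type": "ANSWER", "Ids": ["a1"]}]},
    {"Id": "a1", "BlockType": "QUERY_RESULT", "Text": "$6,300.00"},
]

print(query_answers(sample))  # {'gross_pay': '$6,300.00'}
```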

At the end of this lecture, there is a slide with a few links that can help you go through these features in detail and get started quickly.

Here are some of the customers that are public references of Textract. These are customers who are already using Textract to automate document processing workflows. It's not an exhaustive list, just a representative one, literally what could fit on a slide. But the key point I'd like to leave you with is the diversity of industries these customers come from. The service is not industry specific; it caters to challenges across different industries.

Now, let's switch gears a bit. We've spoken about the challenges in document processing. We've spoken about existing AI services, how they address these challenges, but generative AI is transforming literally every field. And before we dive into how generative AI and AI services work together, let's talk about what makes generative AI so interesting at a high level.

There are three or four features that customers are really excited about when it comes to generative AI:

  1. These models are pre-trained on a vast amount of unstructured data, which makes them capable of solving problems across different contexts.

  2. The sheer size of these models and their parameter counts makes them capable of automating more complex tasks than traditional AI models could.

  3. Because of the vast amount of data they've been trained on and their size, these models generalize better and can now help process a long tail of use cases and documents that were not possible previously.

  4. Finally, customers are really excited about the customization capabilities these large models provide: with very little data, you're able to take a large model, customize it to your specific use case, and achieve high accuracy.

Ok, so how does generative AI fit in with document processing? What we found is that generative AI is augmentative and enhancing in nature when it comes to AI capabilities. Customers have traditionally already been on the path of adopting AI to automate document processing. They are already using it to extract form field values and table structure elements. They're using it to chat with a document by asking specific natural language queries to extract information. They use the confidence scores these models provide to optimize human-in-the-loop processes. And they even go as far as classifying documents and finding specific business entities in their documents.

But what generative AI is doing is augmenting this, adding a layer on top that unlocks new use cases. Rather than just stopping at extracting specific pieces of information, you're now able to use these large models to normalize and transform the information to match the downstream databases or systems where you store or utilize it.

Instead of just asking very specific extractive questions through QA, you are now able to do more complex tasks, such as asking the models to summarize a document in a short paragraph or a few lines, or to perform complex reasoning tasks on top of the data.

Instead of just using confidence scores to optimize your human-in-the-loop processes, you are able to add smart validations, such as automatically checking whether a document is expired or whether certain fields match a custom business format. For example, an SSN in an eight-digit format is something that should not be allowed.
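As a trivial sketch of one such validation: a rule like "a US SSN has nine digits, written XXX-XX-XXXX" can be checked deterministically after extraction, so the model's job is to surface the field, not to know the format:

```python
import re

def valid_ssn(value: str) -> bool:
    """A valid US SSN has nine digits, commonly written XXX-XX-XXXX."""
    return bool(re.fullmatch(r"\d{3}-\d{2}-\d{4}", value))

print(valid_ssn("123-45-6789"))  # True
print(valid_ssn("123-45-678"))   # False: only eight digits
```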

And lastly, the generalizability of these gen AI models allows you to process those very business-specific documents in your long tail that you have not been able to onboard yet, because the existing models just could not generalize and extract information from them.

For those who like a bit more detail, this is a sample architecture that shows how you can get started today, utilizing AWS AI services as well as some of the large language models available through our Bedrock service, to automate some of these use cases.

This architecture breaks into three distinct steps, going from left to right. The first step is where you use Textract to extract the information as well as to classify your document. This step is really important because large language models need text as an input, and the OCR or text extraction happens here. You also typically want to classify your document in this step, because knowing the document type helps you tailor specific prompts for your large language model to perform the task you need. For example, the task you might want to perform on a pay stub might be very different from what you want to perform on an ID document.

Once you go through this first step, you go to the main worker step, the second step, where you're actually extracting information, normalizing it, and performing these advanced tasks like summarization and normalization using the models here. You can either use some of our existing services right out of the box, or channel that data to Bedrock and use any of the LLMs available through our Bedrock service.

And finally, once you're done with this second step of data extraction and enrichment, you can add a human-in-the-loop process where you verify the output of these models and ensure that you're getting the required accuracy, and then send the results to downstream applications.
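The three steps above can be sketched as a skeleton like the following. The Textract and Bedrock calls are stubbed out, and the 0.9 confidence threshold is just an illustrative value, not a recommendation:

```python
def step1_extract(document_bytes: bytes) -> tuple[str, str]:
    """OCR the document and classify it. A real pipeline would call
    Textract here and return (doc_type, extracted_text)."""
    raise NotImplementedError

def step2_enrich(doc_type: str, text: str) -> dict:
    """Run document-type-specific prompts through an LLM, e.g. via
    Bedrock, to extract, normalize, and summarize."""
    raise NotImplementedError

def step3_needs_review(confidences: list[float], threshold: float = 0.9) -> bool:
    """Route to a human only when any field's confidence is below the
    threshold; high-confidence results flow straight downstream."""
    return any(c < threshold for c in confidences)

print(step3_needs_review([0.99, 0.97]))  # False -> straight through
print(step3_needs_review([0.99, 0.42]))  # True  -> human in the loop
```

The third step is the part worth noticing: confidence-gated routing is what keeps the human effort focused on the small fraction of documents the models are unsure about.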

Now, I'm going to switch gears again and show you a demo video. It's a demo where we have implemented such a simple architecture to process a document, and we're going to do three things with that document.

We're going to:

  1. Understand what kind of a document it is, that is generally classify the document.

  2. Extract a specific piece of information, and go as far as normalizing it into the specific format that we want.

  3. And lastly, implement one of these higher-order use cases, like summarization, using a large language model.

Ok, so the document we want to use for our demo is an earnings statement, also called a pay stub. This is something that everybody uses and is probably familiar with.

Now, we want the model to identify that this is actually a pay stub, without us telling it that. We want to extract specific pieces of information in those red boxes: the earnings table shown there, as well as the gross pay.

And apart from identifying this, we also want to summarize the document in a two-line sentence, asking for all the key pieces of information that someone would want to know.

We set up a Step Function workflow which you will see next. This Step Function workflow has a bunch of steps which I'll walk you through.

The first step is a document splitter step. The key here is that we take a multi-page document and split it into individual pages so that we can send them to the Textract sync API, which is the next step you see there. The goal of the Textract sync API here is to do that first step in the architecture, which is extracting the raw text from the document.

The raw text is extracted and can be stored here. To show you the raw text, I've just copied it into a text file, and this is the exact raw text from the pay stub document.

Now, once this is done, there are two parallel branches that we execute. The first branch on the left is where we index this information in Amazon OpenSearch. I'm not going to focus on that, because that's just for indexing. What I'm going to focus on is the right-hand side, where we actually implement those two steps of classification and post-classification extraction.

There is a specific extraction and enrichment step that is implemented. All of these classification and enrichment steps are powered by the Amazon Bedrock service and the LLMs underneath it. These models require not only the text we extracted, but also specific prompts to achieve the classification and extraction tasks.

These prompts are stored in a DynamoDB table, and essentially what the workflow does is, based on the step, pick the prompt, then invoke the specific model with it as input.

Here, I show you the DynamoDB table, where you can see the prompts for classification, for pay stubs, and for bank statements that I have keyed in. These are what I used in the Step Functions workflow.
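The lookup itself is simple key-value retrieval. In this sketch, a small in-memory class stands in for the boto3 DynamoDB Table resource, and the key names `step` and `doc_type` are assumptions for illustration, not the demo's actual schema:

```python
class FakePromptTable:
    """Stand-in for a boto3 DynamoDB Table resource, for illustration only."""
    def __init__(self, items):
        self._items = items

    def get_item(self, Key):
        # Mirrors the {"Item": ...} response shape of DynamoDB's GetItem.
        item = self._items.get((Key["step"], Key["doc_type"]))
        return {"Item": item} if item else {}

def get_prompt(table, step: str, doc_type: str) -> str:
    """Fetch the prompt for a workflow step; raise if none is configured."""
    item = table.get_item(Key={"step": step, "doc_type": doc_type}).get("Item")
    if item is None:
        raise KeyError(f"no prompt for step={step!r}, doc_type={doc_type!r}")
    return item["prompt"]

table = FakePromptTable({
    ("classify", "any"): {"prompt": "Classify this document as a bank "
                          "statement, a pay stub, or other. Return JSON."},
})

print(get_prompt(table, "classify", "any"))
```

Keeping prompts in a table rather than in code is what lets the same Step Functions workflow serve new document types by just adding rows.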

I'll walk you through each of these and show you the interim outputs they create. Let's start with the classification step.

So I have a prompt which I've created, asking the model: given a document, choose between three types, either a bank statement, a pay stub, or something else, and return the answer in a JSON format, with the key being "classification" and the value being the result.

To show you how these prompts act, I open up the Bedrock playground. I copy the prompt over and paste it into the Bedrock playground, and you can see it's the same prompt, which asks for the classification at the end.

There's a place where you can provide the document text. So I copy the document text from the text file, paste it here, and click run. While it runs: this is exactly the same process that happens in the Step Functions workflow, I'm just showing it in a visual format. You can see that the Bedrock model takes that prompt and is able to identify the classification.

This is an AWS pay stub. What's really wonderful here is, if you look at the document, the words "pay stub" are not mentioned anywhere. We asked the model to match against a specific type, and it's automatically able to see that an earnings statement is basically a pay stub.

Let's go to the next step where, given that it's a pay stub, we try to do some extraction. There is an extraction prompt here where I want to extract the YTD gross pay from the document. I'm going to do the same thing: copy this prompt, port it over to the Bedrock playground, and you can see it's the same prompt, asking for the YTD gross pay.

If you look at it, I'm specifically saying that I'm looking for ytd_gross_pay, which is the key name, and asking the model to fill in the value. I put in the document text, and when I run it, you see it returns ytd_gross_pay and the value associated with it.

Now, if you look at the red box at the bottom, you can see gross pay is the row and year-to-date is at the top of the column, and the model has triangulated the value, the 134,000-odd gross pay. This is pretty cool because it's understood that YTD is the same as year-to-date in this case, and it's able to extract that.

Now, let's try to extract the earnings table, the one right on top. For this, again, I've already crafted a prompt which can help extract it. We go to the Bedrock playground, and you can see the prompt I have asks: for this document, get the earnings table, but only two specific columns, the earnings description and the YTD pay, in a CSV format. And I only want the CSV, I don't want any explanations or any other text, which LLMs typically like to give, so I'm being very prescriptive about the format I want. I supply the document text again and run it, and you can see it starts populating those values. There's an earnings description column and a YTD pay column, and it's able to pick out the lines, like the regular earnings, the wellness entry, and the other line items. It's pretty cool that it's able to extract those.
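Being prescriptive about the output format also makes the response machine-checkable. A small sketch: the column names and sample output below are illustrative stand-ins, not the demo's exact values, and a strict parse catches the case where the model added explanatory text anyway:

```python
import csv
import io

def parse_earnings_csv(model_output: str) -> list[dict]:
    """Parse the LLM's CSV-only reply; fail loudly if it strayed from
    the two requested columns."""
    rows = list(csv.DictReader(io.StringIO(model_output.strip())))
    if not rows or set(rows[0]) != {"earnings_description", "ytd_pay"}:
        raise ValueError("model did not return the requested two columns")
    return rows

# Illustrative model output for a pay stub earnings table.
sample_output = """earnings_description,ytd_pay
Regular earnings,120000.00
Wellness credit,500.00
"""

for row in parse_earnings_csv(sample_output):
    print(row["earnings_description"], row["ytd_pay"])
```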

Let's do the last step, which is summarization. Here, I ask it to generate a summary, and I say I'm looking for key information: the gross pay, the net pay, the deductions, the dates for which this pay stub was generated, as well as any other key information. Let's see what it does when we run it.

Ok. As you can see, the model is able to provide a very succinct summary. It gives the gross pay, the net pay, and the deductions, and it's also able to provide the pay period, the from and to dates of the pay stub, as well as the federal tax and 401(k) contributions.

Let's see whether this is correct. As you can see, it's actually found the gross pay: I can't read it from here, but yes, 6,300 is the gross pay. It's picked out the net pay, which is 4,405. The deductions is actually not a field that exists in this document; it's a calculated field, 6,300 minus 4,405, and the model has automatically done that calculation and provided the deductions value. So it's going way beyond extraction here and doing calculations. It's also found the pay period and figured out the specific federal tax and social security tax that you're interested in.
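Derived fields like this can and should be double-checked downstream rather than trusted blindly. A tiny sketch, using approximate figures read off the demo pay stub:

```python
def deductions_consistent(gross: float, net: float,
                          reported_deductions: float,
                          tolerance: float = 0.01) -> bool:
    """Verify the model's derived deductions against gross minus net,
    within a small rounding tolerance."""
    return abs((gross - net) - reported_deductions) <= tolerance

# Approximate values from the demo: gross 6,300, net 4,405.
print(deductions_consistent(6300.00, 4405.00, 1895.00))  # True
```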

So, in summary, what I've done is: I've fed in a document, identified the document type, extracted a very specific piece of information, and added intelligence on top using generative AI models to generate a very succinct summary and very specific data points. This is something that just could not have been done without the latest advancements we have seen in generative AI, and customers are immediately using this to power their different business workflows.

So next, I'm going to hand over the mic to Ruchi. Ruchi is going to talk about how Centene is revolutionizing their document processing workflows using AI. Thank you, Nav.

Hi, everyone. I am Ruchi. I am director of machine learning and AI at Centene Corporation, and I am really excited to be here. What a wonderful audience. I'm here to talk about AI and machine learning at Centene, our journey, and how we accelerated the pace of AI at Centene in partnership with AWS, using various AI services and other services. But before I do that, I want to do a quick poll in this audience.

How many of you are from the healthcare industry? Ok, great. And how many of you are data scientists, machine learning engineers, cloud engineers, practitioners? Awesome. And executives in the room? Awesome, great.

I'll be hitting on three major points during my session, and most of you will find them very interesting, given the variety of areas you all come from.

The first point I'm going to hit on is the massive opportunity that exists today in the healthcare industry in being able to automate the document intake and processing system, and Centene is not unique in that. So I'm going to talk about the opportunity that lies at Centene; it might be relevant to a lot of you as well.

The second point I'm going to talk about is the strategy we used to accelerate our pace of AI in automating the documents that come into Centene, and I'll talk about two major products that we have released over the course of one year that are improving the provider and member experience significantly.

The third point I'm going to talk about is gen AI; obviously, I have to address the buzz, right? So I'm going to talk about the future prospect of using gen AI, not just at Centene but in the healthcare industry, and my own considerations of how we should use generative AI and the kind of precautions we would need to take while using it in the healthcare industry.

So with that, I'll jump right in. At Centene, we transform the health of the community we serve, one person at a time. We're able to do that by being one of the largest Medicaid managed care organizations in the US; we provide affordable and high-quality products to nearly one in 15 individuals in the US. So it's massive, right? Here are some interesting stats about the company. We are one of the Fortune 25 companies. We have a really large employee footprint: 67,800 is the count so far. We are generating revenue at large scale, $137 billion as of the most recent cycle. And there are other interesting stats about Centene. So it's massive, right? We can really make a large impact on our members' and providers' experience.

Let me take a quick step back. I first want to build my credibility, just so you know that whatever I'm going to say next is not a total lie. I started my AI and machine learning journey about 13 years ago, when I decided to do my PhD at the University of Florida, focused on machine learning, and neural networks were one of the key topics I focused on back then. Anybody from the University of Florida? Gators, anybody? Go Gators!

I graduated with my dissertation focused on supervised machine learning algorithms and kernel methods. Then I decided to go into industry and leave academia behind, and I started off as a data scientist. Back then, it was one of the sexiest jobs; nobody knew what data scientists were supposed to do. I joined The Weather Company, which was later acquired by IBM, and I got an opportunity to work on IBM Watson products and roll out some AI products.

Some of them are still out in the market. I learned from amazing sets of people and worked with engineers and data scientists. Then I decided to change the space I worked in: I was working in the weather and aviation space, and I wanted to go into healthcare. I joined Centene Corporation, and that's where we started our journey of AI and machine learning about three years ago, building machine learning products and stepping a bit away from data science for insights and going more into automation.

Here, I'm working with an amazing set of individuals. We have a team of machine learning engineers, data scientists, business systems analysts, and cloud engineers; they're an amazing team. What we do here is build AI products and AI-powered products: world-class, on-demand AI applications to serve our members and providers.

We build AI products and AI-powered products that provide automation, recommendations, and real-time insights to improve our healthcare processes, and we contribute to Centene's AI center of excellence. We do that by focusing on structured data as well as unstructured data: structured data, where data lives in various kinds of database tables and spreadsheets, and unstructured data, where we focus on a lot of text data that comes through faxes, images, mail, and photographs, and, in the future, audio and video.

So, challenges with document processing. As Navneet pointed out earlier, there is a massive volume of documents that comes in every single day at Centene, like at any other healthcare company, right? These documents come in a variety of formats.

These are correspondence documents: appeals, authorizations, and other very important documents filed by providers and members, and they impact their experience significantly. They are complex in their forms and tables, and they have variations in format.

So what happens today is that there is intense labor that goes into converting these documents into meaningful information. Somebody is literally sitting there opening the mail, reading through those documents, and hand-typing every page and its content into the system. In doing so, they get frustrated, because they know their skills could be better utilized doing something else rather than that laborious hand-typing task.

This results in high turnover, and obviously it is error prone: when somebody is doing that day in and day out, there are massive errors as well. So this impacts the process. The volume is increasing by the day, there are errors, it takes time to process every document, increasing the SLAs, and it impacts members and providers adversely. And there are compliance requirements that also need to be taken care of in the healthcare industry.

So there is a massive impact from this process as it exists today. Given that challenge, we had to do something about it, and the way we saw it, it was high time for Centene to build its own intelligent document processing system in house.

We could build it from scratch. That is the first thing we looked at: we have amazing people and amazing skill sets, and we could do it ourselves, which would give us the flexibility to customize it, but it would be time consuming.

The second option we looked into was: can we buy it off the shelf? That would give us speed; we would not have to build things from scratch, we could just use prebuilt functionality. But again, being in the healthcare industry comes with the need for a lot of customization, and we would not be able to tweak those off-the-shelf models or solutions.

So finally, we formed a partnership with AWS, and we decided to take what's available from AWS, not just in the services and infrastructure space but also in the machine learning space, and add our own customizations on top, building our own models wherever that additional customization does not come from AWS. It gave us speed, and it gave us flexibility. Obviously, there's going to be a need to maintain it long term, right? But that's the name of the game.

So with that said, here's what we did next. We started building our intelligent document processing system at Centene by leveraging AI services. We started with the first layer of the system, which was focused on a serverless, event-driven architecture using Lambda functions, so that we don't have to pay when we're not using it and we don't have to worry about maintaining servers. It's easier that way.

The next layer we focused on was security. We use capabilities like least-privilege roles for Lambdas, and data is encrypted at rest and in transit, because data encryption is really needed given that the healthcare industry has to be highly compliant and secure. We use Amazon S3 server-side encryption, we ensure that all traffic goes through our VPC endpoints, and we use Prisma scans for vulnerabilities and controls.

The third layer is important: reliability, the long-term aspect of it. Scalability to process documents, with capacity that increases or decreases as needed based on load; some days we have x documents, and the next day we might have twice that volume. We use GitLab CI/CD pipelines to automate our deployments, and we integrate with ServiceNow, so we have a robust product support system for future reliability.

Intake was another major part of it. Different kinds of businesses, even within a single company, have different ways of taking in documents. A lot of them come by mail, a lot come by fax, and a lot also come through email. So our business partners would ask: can you build something that can take the documents directly from the emails? Defining what that intake mechanism should be was important for us.

So we had intake mechanisms where you could take emails directly. We leveraged API Gateway for anything under 10 megabytes, and we have also used S3, with Presto and Glue, for anything above 5 GB, to intake it through S3.

And finally, this layer is where the heart of it, or the brain of it, lies: leveraging existing solutions, whatever comes from AWS. Textract was one of the key components: Textract OCR, Forms, Tables, and Queries. As Navneet pointed out earlier, these were the features we used, but then we had a need for additional customization, where we had to build in-house machine learning models, train them on our own data sets, and make sure they could classify certain documents into certain types relevant to Centene and Centene's providers and members. These are the things we did in house and built into this layer.

And finally, the ability for us to learn from feedback was very important. Every time a user gets an outcome, we need to be able to gather that feedback and bring it back to the model to retrain it. Talking about generative AI: if we start using some of these generative AI capabilities, this is the layer we would add them to.

Finally, optimization. Everything is built with a software mindset, and we need to be able to monitor not just the software components but also the machine learning model components. So there are various kinds of metrics we need to track, related to Lambda, to our services, and to our models.

We have to prevent overuse, so we set alerts at various points to ensure that no boundaries or limits are crossed, with notifications sent as alerts, and we ensure that the system stays modular.

So, holistically, these are the six layers of the intelligent document processing system that we built at Centene. We built it in a way that powered one product at a time, and in doing so, we were building it from the ground up.

In the next few slides, I'm going to talk about the use cases that enabled our IDP at Centene to be built. The first use case: we had to first build momentum, right? So we decided to pick a use case that was not very volume intensive. We picked a use case with an intake volume of 60 to 80 emails daily, related to provider and member experiences.

Previously, the business partners were using a manual copy-and-paste mode to enter the data into a spreadsheet, and in doing so, they were spending tons of hours. It was very difficult to meet their SLAs, and they were getting penalized.

What we did was use the Centene IDP system to automate the ingestion and reading of the documents. These emails would land directly in our S3 buckets; we apply our Textract solution and an additional machine learning solution to automate that and get the outcome back to the business partner.

There would be checks by a human - the experts on the business side - to ensure that no mistakes were being made. The result of this was significant: it reduced the manual labor by 80%, which means it reduced the processing time by 80%, and it automated the task of hand-typing these spreadsheets by 100%. And in doing so, we had our utmost focus on ensuring that accuracy remained at or above 99%.

The second use case we chose was a much bigger one at scale - because we had tried it, we knew it worked, and it was generating value. So we went ahead and applied this to a use case with a massive intake volume of 2,000 documents daily, where every document could have up to 1,000 pages. As in the previous example, there is a lot of manual process that goes into that as well, where individuals at the mail center are hand-typing these documents, right? There's a lot of error that happens in that process, and a lot of important information could get missed, increasing the SLA and impacting the member experience adversely.

So what we did was again use the IDP system at Centene: we automated the intake of these documents using our system and only used human intervention when we knew that the accuracy was not on par. Again, the result of this is staggering: we have a plan to automate up to 80% of this document intake, and currently we are close to 50% automation. And in doing so, we're ensuring again, through our tests and validations, that we are hitting 99% and above accuracy. There's a significant reduction in processing time - a 93% reduction - which improves the SLA significantly and improves the member and provider experience massively as a result.
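The "only use human intervention when accuracy is not on par" step above is essentially a confidence-threshold router. A minimal sketch of the idea (the field names and the 99% threshold here are illustrative, not Centene's actual values; Textract reports per-field confidence on a 0-100 scale):

```python
def route_extraction(fields, threshold=99.0):
    """Split extracted fields into auto-accepted ones and ones that need
    human review, based on per-field confidence.

    `fields` maps a field name to a (value, confidence) pair, where
    confidence is on Textract's 0-100 scale.
    """
    auto, review = {}, {}
    for name, (value, confidence) in fields.items():
        if confidence >= threshold:
            auto[name] = value          # straight-through processing
        else:
            review[name] = value        # queued for a human expert
    return auto, review
```

Routing per field rather than per document is one way to keep the human reviewers focused on only the low-confidence values, which is what drives the automation percentage up without sacrificing the 99% accuracy target.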

That's awesome, right? So with this, we identified a lot more use cases. We started talking more about it with our business partners, and in doing so, we have identified several use cases where we can actually use the system and automate the intake process.

Now let's talk about generative AI and how we could use it, or intend to use it, in this system. What we are focused on is automating a lot of the written processes that exist today in the healthcare space. We are very much focused on automation, but while doing that, we cannot lose sight of accuracy and the need for human in the loop - which means having a person review any outcome that has low confidence or accuracy, right? Ensuring that we have strategically placed human intervention in the processes where we apply generative AI is going to be an important thing for us.

So as we talk about generative AI - from my perspective, there are three areas where it can start delivering value for the healthcare space. The first one is search, right? We're all aware of the capabilities generative AI has in terms of being able to search for information very fast and providing relevant information. In most cases, there are massive sets of documents where people have to read through hundreds or thousands of pages and decipher the key information out of those documents. And that's where we can use generative AI.

The second one is summarization. Obviously, we all know about that capability as well, which generative AI carries in various forms. In the healthcare industry, again, it's important for individuals to get the relevant information from the right set of documents, and summarization can be a big part of that.

Finally, the third one is the ground truth aspect: generating ground truth from a document so a model can be trained on it is going to be a good use case for generative AI as well, because any machine learning models that we build and customize to meet our own needs will need ground truth, right? They need a source of truth. If we use generative AI to recommend the outcome to the end user, the end user can validate it and help annotate that document faster. That's how we can create this ground truth and train our models.

But in doing all of that, what we do need to consider is that the healthcare industry is very much a high-risk, high-reward space. We wanna make sure that we're not making any inaccurate decisions, because that in turn is going to impact the member's and the provider's experience, right? Not even one member's bad experience is worth that risk. So accuracy is going to be key in healthcare, and ensuring, as I said earlier, that we have a human in the loop is going to be key to providing better service for our members and providers.

The second one is healthcare-specific LLMs - if we can build healthcare-specific LLMs that are customized to our own needs, that would be the other one. And in doing so, we have to invest the time, the effort, and the money in building a framework that helps us reduce hallucinations, right? So how do we create a framework that enables us to compare the outcome of one LLM against another, validate it, and ensure accuracy? That's going to be another really important thing for us.
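One simple shape such a validation framework can take is a majority-vote cross-check: ask several models (or the same model several times) the same question and only auto-accept the answer when enough of them agree after normalization. This sketch is an illustration of that idea, not Centene's framework:

```python
from collections import Counter


def cross_check(answers, min_agreement=2):
    """Accept an answer only when at least `min_agreement` model outputs
    agree after simple normalization; otherwise flag for human review.

    Returns (accepted_answer, True) on agreement, or (None, False)
    when the answer should be routed to a human.
    """
    normalized = Counter(a.strip().lower() for a in answers)
    best, count = normalized.most_common(1)[0]
    if count >= min_agreement:
        return best, True
    return None, False
```

Real frameworks would normalize far more carefully (for example with semantic similarity rather than exact string matching), but the structure - compare outcomes, validate, fall back to a human - is the same one described above.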

And finally, managing the cost and risk, right? Building our own LLMs, or even just using generative AI technology, is not cheap. So how do we continue to create value, and how do we continue to de-risk this space? That's going to be an important thing for us as well.

So finally, there are some links here to the resources at the end, and I'll call Nav back to the stage so we can take some of your questions. Please don't forget to take the survey in your mobile app. Thank you for being a wonderful audience, and thank you everyone for joining.
