Get the most out of your data with ML-powered search

All right, welcome everyone. I'm Jean-Pierre Dodel or JP, Principal Product Manager for Amazon Kendra. And today we're gonna talk about getting the most out of your data with ML powered search.

And to add some real-world perspective to this conversation, we're excited to have two guest customers today: Hemant Modi, Senior Director of Product Management at Qualtrics, and Jomo Stark, Senior Director of Innovation at Orion Health, who will share their experiences and insights with Kendra.

All right. So let's get started.

So when we talk about ML-powered search, we're really talking about smarter, more intelligent search. But in order to define this concept, let's start by defining what the problem is. And it's no small problem: 80% of the data in the enterprise today is unstructured. This is your documents, your help websites, manuals, PDFs. All that content is being generated at an exponential rate, and it's unstructured.

And when we talk to our customers about their experience with the conventional search technologies they use today to search through that content, we hear a lot of frustration. It ranges across the board: inaccurate keyword search when they're trying to find answers, complex and lengthy implementations, a lot of manual tuning to try to get relevance right, and a lot of ongoing maintenance.

So how does Amazon Kendra address these challenges with intelligent search? We broke the problem down into two areas. The first one is addressing accuracy: we wanted to make it easy to find what you are looking for.

In order to address that, we have four main pillars that we focused on. The first: we wanted to support natural language queries. This makes it a lot more intuitive for searchers to find their content and ask more specific questions.

But in order to support that, we needed to create an engine from the ground up that leveraged state-of-the-art natural language understanding machine learning to understand the nuance of language and find those answers. That's why we created three major models under the covers that find answers in different ways.

The first one delivers instant answers, the ability to actually extract answers when you ask questions. This is the question-answering experience that we'll show you with Kendra. The second one is for FAQs: if you're curating FAQs in your enterprise, we have a dedicated model for FAQ matching, where we find the best-matching question to give you the curated answer back.

And we also have semantic document ranking, a deep learning model that tackles the particular task of giving you relevant documents back. The combination of those three models gives you a pretty unique experience.
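All three models surface through a single Query API call, with each result item tagged by the model that produced it. Below is a minimal sketch, assuming boto3 and a placeholder index ID; the grouping helper is illustrative glue code, not part of Kendra itself.

```python
def group_results(result_items):
    """Split Kendra ResultItems by the model that produced them:
    ANSWER (instant answers), QUESTION_ANSWER (FAQ matches),
    and DOCUMENT (semantic document ranking)."""
    groups = {"ANSWER": [], "QUESTION_ANSWER": [], "DOCUMENT": []}
    for item in result_items:
        if item.get("Type") in groups:
            groups[item["Type"]].append(
                item.get("DocumentExcerpt", {}).get("Text", ""))
    return groups

# In a real application you would call the service, e.g.:
#   import boto3
#   kendra = boto3.client("kendra")
#   response = kendra.query(IndexId="<index-id>",
#                           QueryText="Where is the IT support desk?")
#   grouped = group_results(response["ResultItems"])
```

The UI then renders the ANSWER group at the top, FAQ matches beneath it, and the ranked documents at the bottom, mirroring the experience described above.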

The third piece that was really important for this experience is broad domain expertise, so that Kendra would have high accuracy out of the box without tuning. So we pre-trained it on 14 domains including healthcare, life sciences, financial services, legal, marketing, energy, et cetera.

And then the fourth component for accuracy is that we wanted Kendra to improve its accuracy over time automatically. So we have an out-of-the-box feature called incremental learning, which looks at user feedback and user interaction with the engine to self-adjust and self-correct its relevance based on those trends.
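The signals incremental learning consumes can be sent back programmatically via the SubmitFeedback API. A hedged sketch, with placeholder index and result IDs; the payload shapes follow the documented boto3 `submit_feedback` parameters.

```python
from datetime import datetime, timezone

def click_feedback(result_id):
    """One click-through event for a result the user opened."""
    return {"ResultId": result_id, "ClickTime": datetime.now(timezone.utc)}

def relevance_feedback(result_id, relevant):
    """One explicit thumbs-up/thumbs-down judgment on a result."""
    return {"ResultId": result_id,
            "RelevanceValue": "RELEVANT" if relevant else "NOT_RELEVANT"}

# In a real application, after a query:
#   import boto3
#   kendra = boto3.client("kendra")
#   kendra.submit_feedback(
#       IndexId="<index-id>",
#       QueryId=response["QueryId"],   # returned by kendra.query
#       ClickFeedbackItems=[click_feedback(item["Id"])],
#       RelevanceFeedbackItems=[relevance_feedback(item["Id"], True)],
#   )
```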

So that addresses the first challenge, accuracy, or making it easy to find the information. The second challenge we wanted to address is the complexity of implementation.

To do that, we focused on creating key accelerators. The first one is making it easier for customers to ingest all that content from all those pockets of knowledge in the enterprise. So we focused on creating connectors, and also on partnering so that other trusted partners could bring their connectors as well. Making it easy for customers to aggregate this content was a primary goal for us.

The second accelerator is that, as this content is flowing into Kendra, we wanted to provide a mechanism for customers to augment or preprocess that content, whether it's transcribing or OCR'ing or whatever you want to do to it. We have a mechanism in there to easily preprocess this content before it gets into the index.

And the third important one is that we wanted to provide a tool to create fully functional applications without any coding necessary. We'll show you an example of that.

And of course, the third element for the service is security. All of the data that's pulled from your repositories is encrypted in transit and at rest. We also provide token-based access control, so that when users are searching, we know who that person is and what access rights they have, and we can filter the search results based on those rights.

And of course, we have an integration with IAM Identity Center to connect with your identity providers and provide single sign-on, so that the whole experience is secure.
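That token-based access control flows through the Query API itself: the application passes the caller's identity token, and Kendra filters results by access rights. A sketch under the assumption that the index is configured for user-context filtering; the token would be the signed JWT issued by your identity provider.

```python
def secure_query_args(index_id, query_text, user_token):
    """Build Query arguments that carry the caller's identity token so
    Kendra can filter results down to documents the user may access."""
    return {
        "IndexId": index_id,
        "QueryText": query_text,
        "UserContext": {"Token": user_token},
    }

# Usage sketch:
#   import boto3
#   kendra = boto3.client("kendra")
#   response = kendra.query(**secure_query_args(
#       "<index-id>", "where is the IT support desk", jwt_from_idp))
```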

Talking about these connectors, these are some of the key connectors that we offer out of the box with Kendra. You can see it spans a wide range of vendors, and we build the connectors that our customers tell us they need the most. So it's across the board, from SharePoint to Confluence, Alfresco, Box, et cetera.

And in partnership with experts out there who have developed other connectors as well, we have over 100 connectors available in our connector library today.

So what does intelligent search really feel or look like? Well, the answer is pretty simple: your end users want accurate answers faster. That's really what it is. They want to go back to what they were doing. Finding information should not be a difficult task.

Compare the traditional experience on the left, where, as a new employee looking for the IT support desk, I type in those keywords. This is actually a screenshot from our own previous internal search, where you would get keyword matching and a list of documents that contained those keywords, without really understanding what the question was about: that you were looking for a location, for where that IT support desk was.

Compare that with the right-hand side, with intelligent search, where you can actually ask "where is the IT support desk?" We understand the meaning of that question and can respond with a location, "first floor," that was extracted from a totally unstructured piece of content, an HTML page in this case. We provide a snippet of the passage it was extracted from right underneath, with a link to the document. And if you have FAQs, we would display the ones that are highly confident for that query. It's a totally different experience: you get your answer and you go back to what you were doing. You don't have to click through documents.

Looking a little deeper into that experience: I talked about three major models before, and this is how the experience is delivered to your end users. That "first floor" answer is provided by the instant answers model. This is the reading comprehension model that can extract words, phrases, sentences, and passages from any unstructured piece of text that you have.

We call this the Kendra suggested answer, or the instant answer. Underneath that, if you have FAQs, the FAQ matching model will display the high-confidence answers, so that you have a complement to that extracted answer. And if you don't have an extracted answer at the top, because Kendra is not confident enough or the content is not there, then you would not see that box. But if you have matching FAQs, we display them.

And at the bottom are the most relevant documents for that search. So you can see how we go from very specific answers down to the document level to bring this rich experience to the end user.

Now, like I said before, Kendra has really high accuracy out of the box without tuning, because of this pre-training and the massive models that are powering it. But if you want to tune your relevance, there are two options here.

The first one is the incremental learning feature that comes out of the box with Kendra. Basically, it takes all the thumbs-up and thumbs-down and the click-through data. As users search and click through documents, all that data is fed back to Kendra so that it can promote the documents preferred by end users to the top. This approach is completely automatic, and no machine learning expertise is needed.

The second approach is that we give customers more fine-grained control and influence over the relevance ranking algorithm. To do that, we provide some sliders. Let's say you have SharePoint repositories, Confluence, file shares, and others, and your SharePoint repositories happen to have more authoritative data. With these sliders you can very easily say: my SharePoint content, I want you to weigh that a little higher than my wikis or file shares. So we provide a very simple mechanism to boost content in this way, based on data source, a particular author, document freshness (the more recent the document, the more relevant, if that's applicable), and other criteria. You can do this at the index level so that everybody benefits from these rules, or you can do it based on the context.
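Programmatically, those index-level sliders correspond to relevance tuning on document metadata attributes via the UpdateIndex API. A hedged sketch: `source_system` is a hypothetical custom attribute you would have populated during ingestion, while `_created_at` is a built-in date attribute; Importance runs from 1 to 10 per the relevance-tuning documentation.

```python
def tuning_updates():
    """Metadata configuration updates expressing two 'sliders'."""
    return [
        {   # freshness boost: more recent documents rank higher
            "Name": "_created_at",
            "Type": "DATE_VALUE",
            "Relevance": {"Freshness": True, "Importance": 5},
        },
        {   # weigh the more authoritative repository above the wikis
            "Name": "source_system",   # hypothetical custom attribute
            "Type": "STRING_VALUE",
            "Relevance": {
                "Importance": 3,
                "ValueImportanceMap": {"sharepoint": 8, "wiki": 2},
            },
        },
    ]

# Applied with:
#   import boto3
#   boto3.client("kendra").update_index(
#       Id="<index-id>",
#       DocumentMetadataConfigurationUpdates=tuning_updates())
```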

For example, if you have an engineer logging in and searching, you may want to boost GitHub or documentation-type content over more general sales or HR-related content, and vice versa. You can do that at query time. And of course, you can also import your custom synonyms to expand Kendra's understanding of your business's own vocabulary.

Now, in terms of top use cases, here is how customers are using Kendra today.

The first one is focused on the employee experience. This is creating an enterprise-wide search application that searches across all of your repositories, creating a single point of search. That's one approach. Others will create department-specific search applications, like an HR portal, or an R&D portal that searches through highly technical information for engineering work. So you can divide and conquer, or you can create a single search experience with Kendra.

The second category is more outward facing: transforming the customer experience. This is, for example, improving the call center agent assist workflow, so that agents can serve their end customers, the callers, more efficiently and quickly and reduce call time. It's also powering self-service virtual agents or chatbots. Chatbots are typically configured to handle specific intents, but when they don't recognize an intent, they can fall back to Kendra, which has this broader capability to answer any question against your unstructured content. And of course, you can also replace the engine in your regular website search with intelligent search, to change that self-service experience for your customers.

This is where Qualtrics will share some insights: they've integrated Kendra and Slack to streamline their support workflow, among other components. A very interesting story there.

And then the third use case is embedded search. This is ISVs and SaaS applications who want to improve their core product offering by replacing the search solution they have with intelligent search. This is where Orion Health will share their experience and what they've done with Kendra to improve their core offering.

So what does it take to get Kendra up and running? It's basically a three-step process. You start with ingesting your content: on the left-hand side, step one is data ingestion. This is where you configure your connectors to your repositories. And if you don't find the connector you need among the 100-plus we have, there's a custom data source API that you can use to push your own content out of your homegrown data sources into Kendra as well.
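That custom data source path boils down to the BatchPutDocument API. A minimal sketch, with invented document IDs and text; the Documents shape follows the documented boto3 `batch_put_document` parameters.

```python
def to_kendra_document(doc_id, title, text):
    """Wrap raw text as a document Kendra can ingest directly."""
    return {
        "Id": doc_id,
        "Title": title,
        "Blob": text.encode("utf-8"),   # Kendra expects bytes for Blob
        "ContentType": "PLAIN_TEXT",
    }

# Usage sketch (the index ID is a placeholder):
#   import boto3
#   boto3.client("kendra").batch_put_document(
#       IndexId="<index-id>",
#       Documents=[to_kendra_document(
#           "doc-1", "IT support",
#           "The IT support desk is on the first floor.")])
```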

By the way, these connectors can be scheduled so that they automatically synchronize with your repositories and update the index accordingly. And as this data is flowing through ingestion, there is a feature here that we call custom document enrichment.

Basically, that allows you to preprocess the content as it flows in. Let's say you have a lot of PDFs with scanned images, or you have video or audio content. You can call external services through this feature: Amazon Textract will OCR any scanned images in your PDFs or documents; Amazon Transcribe can do the transcription from audio and video; Amazon Comprehend can do entity extraction, if you want to pull out people's names, company names, locations, et cetera. Or you can run your own classification, if you want to classify the content. You can call these external services to augment the content as it flows in.
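Custom document enrichment is configured on the data source as a pre-extraction (or post-extraction) Lambda hook. A hedged sketch of just the configuration shape: all ARNs and bucket names are placeholders, and the Lambda itself is where you would call Textract, Transcribe, or Comprehend.

```python
def enrichment_config(lambda_arn, staging_bucket, role_arn):
    """Pre-extraction hook: Kendra hands each raw document to the Lambda
    via the staging bucket and indexes whatever the Lambda writes back."""
    return {
        "PreExtractionHookConfiguration": {
            "LambdaArn": lambda_arn,
            "S3Bucket": staging_bucket,
        },
        "RoleArn": role_arn,
    }

# Attached when creating or updating the data source, e.g.:
#   boto3.client("kendra").create_data_source(
#       ...,  # name, index ID, connector configuration
#       CustomDocumentEnrichmentConfiguration=enrichment_config(
#           "<ocr-hook-lambda-arn>", "<staging-bucket>", "<cde-role-arn>"))
```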

Once you've enriched the content, which is optional, you arrive at stage two, where your content is readily available for searching, and you can optimize it. Like we said before, you have incremental learning available.

You can also adjust relevance tuning if you want to reflect specific weighting of data sources, and you have your custom synonyms to teach Kendra more about your particular vocabulary.

Once you've optimized, you can deploy this to your website, to your self-service chatbots like we said before, to your contact center integration, or to a SaaS solution. And Kendra is implemented via APIs, so all the functionality that you see in the Kendra console is available through APIs.

But we also provide this low-code/no-code UI builder to create a fully functional search experience, and we offer some search components that you can grab and piece into your own framework if you want to. So we offer different options there.

All right, talking about this experience builder: this is the feature that allows you to build fully functional search experiences very quickly, without writing any code. In the Kendra console, you can create a new experience and configure it by selecting which widgets you want: the search box, the instant answer box. You can configure what it looks like and what metadata is in there, so you can really customize this to launch quickly. Then you can integrate it with your IdP and your access control so that it's secured for your end users. And it's fully managed, so you don't have to create an EC2 instance and drop anything in there; you just create the experience and share it with your users.

Another important feature in Kendra is the analytics dashboard. This is really critical to understand key quality and usability metrics. It's basically to understand: what are people searching for? Where is it working well, and where is it not? How many searches am I getting that generate zero results? That may be an opportunity to add more content, if you're seeing people searching for things they're not finding at all.

Or: what are the most popular documents being clicked on? And you can associate that with some of the confidence scores we see here. So it's a very rich reporting tool that allows you to look at different time periods to understand the patterns and the trends.

All right, to wrap up the value proposition of intelligent search, there are two main things here. The first is increased employee productivity and customer satisfaction. When you're presenting better, quicker, accurate answers to your end users, that's time savings: they're not spending as much time looking for that piece of content or that answer.

It's also cost savings, because by making it faster, call centers can reduce the duration of calls and realize those benefits as well. And in addition, if you have a search application linked to any kind of regulatory compliance goal, you're also mitigating your risk by offering a more intelligent search that gives you the right answers.

The other category is a lower total cost of ownership with Kendra, because it offers so much out of the box that you don't have to build. It's a fully ML-powered search service, and creating your own intelligent semantic search solution is really not trivial.

It comes out of the box, and it's a fully managed service. You don't have to manage clusters, you don't have to manage models, and there's no machine learning expertise required. So you can have a lean team of people just focusing on business goals and building the application. We have built-in connectors to popular repositories, like we saw before,

so that allows you to go quickly in aggregating that content and making it searchable without developing complex ETL jobs.

And then the last one is the built-in search experience builder, which allows you to very quickly create a fully functional search UI that's fully secured for your end users.

So now what I want to do is show you a few examples of Kendra's out-of-the-box accuracy across a few domains, so you can get an idea of how it performs without any tuning.

For the first example, we took a data set of about 40,000 highly technical medical documents that were created during the COVID-19 research stages, for vaccines, et cetera. We just indexed that content and tried a few queries.

If you ask the question, for example, "when did COVID-19 start," you can see that Kendra returns the passage about the first clinical trials, with December 10, 2019 in bold. That bolded element was added by the search response to tell you: this is a predicted answer to your question. The source text doesn't have that bolded; it's just leading your eye to the answer. And you're done. You're not clicking through documents to find a date.

Another example with the same data set, but this one is way more technical and more specific. If you have a technical team of folks doing research, this goes to show you how accurate it can be. We can ask the question, "when is the salivary viral load highest for COVID-19?" It's a mouthful.

But you can see the answer is provided in the first sentence: it's highest during the first week after symptom onset.

Compare that with the NIH website; you can run these queries there if you want to, and see how the answers come back. They have a conventional, keyword-based search engine, and some of these results seem relevant; they talk about viral load. But you don't have an answer, and those documents may not have the answer specific to when it is highest. So you'll be clicking through documents looking for answers,

and you may not find it there. It's similar with web search: we type in the same question and get some relevant documents back, but again, no specific answer. You still have to click on documents to find the answer.

This is an example with Wikidata this time, a different data set, again with no tuning. This is Wikidata on drugs and their dosages, side effects, et cetera. If you type in "paracetamol adult dose," it prominently displays an answer, 3 to 4 g, because it's very confident that this is the right answer for you,

and it displays it right there in the answer.

Switching to a different domain, this is financial data. We took some publicly available content on financial information and wanted to try out some questions about inflation.

Here we just typed in "inflation rate," which is not even a fully formed natural language query like "what is the inflation rate?" But Kendra understands the meaning of the question. It's not doing a keyword search; it's actually predicting 5.1%, telling you right there that it is 5.1%. So it's answering the question and giving you the most relevant passage for that particular answer.

Now, if I get more specific, "inflation rate in Sri Lanka," Kendra feels more confident about the answer, because it doesn't have to assume the US; you're telling it where. Therefore it's more confident in giving you that prominent answer, 54.6%, because the query is more specific. It gives you the answer with the passage it was pulled from and a link to the document, and you're done. You're not clicking through documents again.

This is an interesting example, on financial data again. If you try running a query like this on a website that uses conventional keyword-based search technologies, the default behavior is to remove any extraneous words from the query and the content: words like "what," "the," "a," "or," and "and" are removed from the query. They're called stop words.

That's because those engines just want to focus on the key term here, which is "CPI," and you end up with a query that just looks for CPI anywhere, and you get a long list of documents again.

Whereas here, Kendra understands the nuance of language. It knows you want the definition of what CPI is, and it gives you the answer: Consumer Price Index. So this is key to helping the engine understand context and what you mean with your query.

All right, my last example here is searching through transcripts, video transcripts, searching through media. In this case, we took some videos from the internet and transcribed them so we could index them into Kendra and see what kind of answers we can pull from that.

Here, what I want to ask is, for example, "what is the effect of oil prices on supply chain?" I want to know if I can find an answer in the transcript, and then see how I can jump to the video to actually listen to the answer.

So let's look at this pre-recorded example. Here I'm going to click and enter the query "effects of oil prices on supply chain," and we see that Kendra brings back an answer talking about food prices, oil, and the supply chain. But I can also jump directly to the video; I have a deep link there, because I know where the answer is. Let's click on that: "a list of consequences for supermarket shoppers, who have already seen food prices increase 7% since around this time last year. Much of that increase can be linked to the cost of getting food from the farm to shelves. The same can be said for any other product that is moved by ship, train, or truck. Oil is one of those commodities that really feeds into every part of the supply chain and therefore impacts inflation."

All right, that wraps up my examples, showing you a wide range of queries across a wide range of domains without any tuning. To continue our conversation on intelligent search, I'd like to welcome our first guest customer, Hemant Modi, who will talk about his experience with Kendra.

Thank you, and good afternoon, Jean-Pierre.

My name is Hemant Modi, and I'm the head of real-time programs at Qualtrics. I'm super excited to be here today to share a story about Qualtrics and Kendra.

So who is Qualtrics? We created the experience management category, and we are the leaders in this space. We're a public company with over 5,000 employees globally. What we do is help companies provide great human experiences; we want to make every interaction an experience that matters. How we do this is on the Qualtrics XM platform, which enables companies to collect, understand, and take action on both unstructured and structured data.

In fact, 85% of customer feedback is unstructured data, and it comes from sources like chat and voice calls, which are predominant channels in the contact center space. We know the contact center is an experience that is critical for every business.

According to the Qualtrics XM Institute, after a bad experience, 34% of customers will decrease their spending with that brand, and 19% of those customers will actually stop spending with that brand. A quick show of hands for those who have had a bad experience talking to a customer service agent.

The other side of this is that it's a difficult job for the contact center agent, listening to random questions from customers all day long in a stressful environment. That's why Qualtrics wants to invest in the contact center's agent experience and workflow space: to help reduce costs, boost agent productivity, and drive customer satisfaction and brand loyalty.

And we are committed to doing this by developing a recommendation engine that is specific to real-time agent assist solutions. We decided to do this with context-aware, relevant recommendations that are based on agent coaching behavior models and knowledge-base sources, and that is the focus of the discussion today.

So why did we choose Amazon Kendra, and what are the key features? Well, out of the box, it supports keyword-based searches and natural language queries. It provides us specific answers to natural language questions by extracting relevant text from the source documents. It also learns relevance over time based on things like source, author, and document freshness, and this relevance can be tuned based on the usage of the agents talking over the phone, learning over time. It also keeps track of analytics, to see the quality of search results and identify gaps in content.

Furthermore, the extensive library of first-party and third-party data source connectors is also a key feature we were interested in.

So what are the benefits of Amazon Kendra? First of all, it allows us to kick the tires very easily; we were able to deploy a proof of concept very quickly. Next, it's a huge time savings for us. Otherwise we would be investing in a multi-year project to build indices and NLP query engines from scratch. With the first-party data source connectors that are available, the Qualtrics development team is able to spend time on NLP problems instead of building custom scripts. It also features built-in operational dashboards for configuration, monitoring, and troubleshooting.

It handles secure customer private data. And of course, Kendra with a K rhymes with Qualtrics with a Q.

Our first application of Kendra with production users was a developer project called Washi Tong, or WST. This is a Slack channel that was developed for our Qualtrics support reps. On the left, you see the Slack interface with the different channels. In this one, the agent has subscribed to the Washi Tong channel, and the agent is able to use a keyword trigger such as "wst" and then type in their search query, "change default survey options." This is sent to Kendra, which responds back instantaneously with a recommended list of relevant articles.

So how does this all work? What is the high-level architecture? First of all, we have the agent, who subscribes to the Washi Tong channel in their Slack interface. They type in a search query, which is sent to the Qualtrics recommendation engine, which analyzes the search query, applies NLP, and sends it on to Amazon Kendra.

Now, Amazon Kendra has ingested content sources from the Qualtrics support website, Slack channels, and various Google Drive documents. It responds back with the most relevant list of knowledge-base articles. That list of articles is sent back to the Qualtrics recommendation engine, which in turn sends it back to the Slack interface the agent is using.

Our next application was sending transcriptions of real-time audio from the contact center to Kendra, to process and return relevant knowledge-base articles.

So let's step through how this all works. On the bottom, we have the customer contact center infrastructure, and on the top, we have the Qualtrics AWS cloud.

First, we have the audio transcribed from the contact center: the real-time audio stream is brought into this audio connector here in the Qualtrics AWS cloud. This is sent to a transcription engine such as Amazon Transcribe. That transcript is sent to the Qualtrics NLP engine for enrichment, and the analyzed transcript is then sent to the recommendation engine, as I spoke about earlier. This is then sent as the search query to Amazon Kendra, which looks up the various customer data sources and responds back with the relevant list of knowledge-base articles. And then we have the recommendation engine interfacing with the notification API, which delivers those recommended articles to the agent desktop.
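The recommendation step at the heart of this flow can be sketched in a few lines. This is an illustrative simplification, not Qualtrics' actual implementation: `search_fn` stands in for a wrapper around `kendra.query`, the utterance comes from the enriched transcript, and only document titles are returned as recommendations.

```python
def recommend_articles(utterance, search_fn, top_n=3):
    """Turn the latest transcript utterance into a search query and
    return up to top_n relevant document titles as recommendations."""
    results = search_fn(utterance)
    docs = [r for r in results if r.get("Type") == "DOCUMENT"]
    return [d.get("DocumentTitle", {}).get("Text", "") for d in docs[:top_n]]

# Usage sketch with a Kendra-backed search function:
#   search_fn = lambda q: kendra.query(
#       IndexId="<index-id>", QueryText=q)["ResultItems"]
#   articles = recommend_articles("my survey flow disappears", search_fn)
```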

Let's watch a demo to see this all in action.

Thank you for calling Qualtrics support. My name is Mike. Mr. Smith, I see that you chatted with us recently in relation to a survey problem. Well, I'm calling regarding a support ticket ending in 94113. My survey issue is still unresolved.

Mr. Smith, can you verify your account number for me, please? Yes, it's 703206. Got it. And what is your issue? The problem is every time I add a question, my survey flow disappears. It's very frustrating that it's still not working; at this point, my boss has gotten involved. I really need to get this fixed.

Mr. Smith, I'm really sorry that you're experiencing this survey issue. Let me check on this. So what you see here is Kendra returning various articles from content sources, in this case the Slack channel, having identified the survey search. So what I see is: this is a known issue. We may have a fix, but I'm not sure. I see.

I'm really on a time crunch here. Is there anything we can do? Let me check one more thing. May I place you on hold for a moment? Sure.

So now the agent will type in a directed search: renaming the survey. It's a known bug, and I will follow up with engineering. In the meanwhile, you can rename the survey to force it to resync and restore your survey flow. I will put a note in the support ticket 94113 and email you these workaround instructions.

OK, that sounds like a good plan. Thanks, you've been very helpful explaining all this to me today. No problem, glad we could help. Have a great day. Thanks, bye.

And this is our version of automated call summary, which summarizes the conversation that was held between the agent and customer.

OK. So we found that AWS were great partners to work with; they really understood our concerns. We had a tight collaboration in understanding and implementing Kendra. They also listened to our feedback and prioritized the first-party data source connectors that were going to be added to the product.

When we conducted our initial pilot of the real-time agent assist solution from Qualtrics, we found that there was an increase in initial ticket resolution by the agents who were part of the pilot versus the ones who were not. We also saw an increase in the ticket volume handled by the agents in the pilot.

Um, our Qualtrics customer service reps were very impressed and super excited to use this real-time agent assist product. They felt very empowered.

Hemant Modi said:

"Uh, even one of them said, 'This tool is a game changer and it has a very intuitive interface.' So what's next? The roadmap for us for implementing more Kendra features: we're investigating the use of Amazon Kendra with the first-party Slack connector. When we first implemented this, we had to use our custom scripts; now we want to replace them with the added functionality and remove the burden from our development teams to support custom scripts. We're also exploring the use of the tuning functionality with the integrated feedback APIs. And then we want to extend the Qualtrics contact center tech stack with additional complementary services from Amazon, such as Amazon Transcribe and Amazon Connect. And of course, we will need to support growth and scale as we add more customers with more knowledge base content, and we need to make sure that that content is separated across each customer. My name is Hemant Modi; I want to thank you for your time today. Hopefully you understood the value of Qualtrics' venture into optimizing the contact center agent workflow."

Presenter said:

"Great job. All right, thank you for those insights and for sharing your story. And now I'd like to welcome our second guest customer, Jomo Stark from Orion Health, who's going to share his experience with Kendra."

Jomo Stark said:

"OK, great to be here. Uh, yeah, Jomo Stark, Orion Health. Very, very excited to be here."

Jomo Stark said:

"So, Orion Health: we are truly trying to reimagine healthcare for all. That is our mission. Who are we? About 110 million patient records are accessed pretty frequently throughout our customer base. We're 100% healthcare focused, been in the industry for about 30 years, with exclusively healthcare customers in 15 countries including 10 US states. The implementation I'm going to talk about here today is not quite public yet, but we believe it's going to be one of the world's largest consumer-facing digital front doors; it's a province of Canada. And we are very much a software development firm. We've invested almost $345 million into the process; half of our employees are software developers. So what's a digital front door to us? It's a consumer engagement platform, and it's truly a platform, right? We've got our technology, and we use third-party technologies, best of breed, to meet our customers' needs. So it integrates data and tools and services to provide a cohesive end-to-end social resource and health resource navigation environment. The benefits: an improved consumer experience. If you need care in this DFD, you dial 811 or you go to a website, and this website is powered by the Orion digital front door. Faster and more efficient access to healthcare, including the Kendra search that we're going to talk about, which we've built in as a core of our platform. We've got a symptom checker, we've got access to telehealth, and there's a chat that connects to a 24-hour nurse center. The goal really being to get people out of the ER and get them the information they need, and to quickly get them to the ER or in front of a nurse if they do need that. All of this ideally will make better use of resources, and a lot of those measures are going to come out shortly. The first phase of this went live in June; the second phase went live in October.
Um, so we're going to measure all of this, and we'll talk about that in a second. Maximize the value of existing technologies; this is critical, right? You're not going to want to come in and tell people they need to rip everything out and replace it with a bunch of new things. They're going to have some pieces they like, they're going to have some pieces they don't, and often they're going to want to do a migration over time. They're going to want to roll some pieces out, make sure it works, and add pieces later. And really, the goal is to simplify healthcare transactions: make it easy for folks to get to the information they need quickly, regardless of their computer savvy or their web savvy. And lastly, but certainly not least, the goal is to address gaps in health equity. This is really democratizing the delivery of care and getting the care to the people that need it. I mean, we still have healthcare deserts, right? We still have folks with low access to computers and mobile technologies. And so one of the big goals with all of this is to improve that."

Jomo Stark continued:

"So the problems we're resolving: patients have many organizations to deal with. If we're talking about the US, there is no central payer, no single payer; there is a myriad of health organizations and hospitals and clinics and your insurance, and depending on where you live, it's very different. So it's incredibly complex to try to navigate. It's still incredibly paper-based, right? You probably went to a doctor recently and, even though they know who you are, they handed you a piece of paper and made you fill it out. It's still an issue. And the third bullet, which really addresses the Kendra solution: people don't really know who to trust for curated information, and we'll dive deeper into that. And this last one: waiting for staff to do tasks on behalf of the patient. Getting to the point where people can really do self-service, right? Engage on their own without having to talk to somebody or wait for a call back or wait for a piece of paper or a fax. So this is the solution that we see: a unified channel to the many organizations, a unified channel across health, across providers, across organizations, conceivably across payers, although that gets a little politically challenging; a single front door for them to get access to all of this is ideal. Inefficient paper: digitize the processes. We have the technology; this isn't something that needs to be invented, it just needs to be deployed, integrated, and adopted. And then again, the third one: how do you fix this curation issue? Well, you give them access to verified clinical resources, and that's where we started, what we knew we needed to do, what our client needed to do, and what started us down this road with Kendra. And then lastly, offering self-service, the ability to do this on your own without interacting with somebody, is incredibly important."

Jomo Stark continued:

"So problem number one: too much information. You do a search for 'I have a sore throat' on a typical web browser, and you're going to get half a billion hits, right? Twenty of those may be legitimate sources; you might get Mayo and Kaiser and Geisinger, and they're all good, but they're all different. And you're going to get some crazy sources too, some stuff in there from who knows where. It is easy to be led astray by misinformation and poorly ranked results. Everybody's done this; it's very, very frustrating to do searches for conditions on a normal search engine. So Kendra searches across a curated set of data that our clients can curate themselves, across federated websites, documents, and other medical sources, and provides trusted medical information, and that's critical. Problem number two is noise. Again, how do you prioritize and rank the remaining hundreds or thousands of hits that you're going to get and really know what's relevant, and who's to say what's relevant, right? So with Amazon Kendra, by prioritizing resources from trusted sources, government health and trusted institutions, above external sources, it really helps avoid this misinformation and this 'Dr. Google' effect. So why Kendra? Orion is a longtime partner of AWS and a big customer of Kendra; our products are AWS native. It's a natural progression for us to explore using Kendra as the centerpiece for our new flagship product, this digital front door. And they also offered a really exciting six-week rapid prototyping engagement. They came to the table with some basic pre-built curated searches, and through a fast iterative process we had a working prototype in about six weeks, with the ability to ask natural language questions at the centerpiece, getting people quickly to what they're looking for in a curated fashion.
The curated web pages avoid the noise that you get with a typical search engine: streamlining the searching of information and connecting people with trustworthy links relevant to their locality. It can be by city, it could be by county, it could be by state. Be culturally sensitive, depending on the audience you're talking about; they're exploring some of this in Canada, with some pretty exciting things coming out. And again, we love the continuous improvement of the system; the fact that it learns and gets smarter over time is pretty fantastic. And really, from our client's perspective, the Orion brand isn't the brand that they care about; they care about their own brand. So this really needed to bubble to the surface and promote their brand: keep their branding and their colors and their content. Keep it trustworthy, keep it relevant, keep it local, and keep it simple. And here are a couple of architects talking about it."

Architect 1 said:

"How does the intelligent search in Amazon Kendra help with the digital front door? So that's one of the most interesting parts of the DFD. When we go on the digital front door website, we have the search bar, and in the search bar the user is able to perform a search across multiple things, like health service organizations and medical conditions, make bookings, and actually search for the health provider that fits them. So in Kendra, we have the index that contains a lot of documents; there might be millions of documents, and it scales quite well. And that Kendra index provides APIs, which we expose via API Gateway to that search UI web component. Cool. And you say millions of documents; I imagine there's a huge amount of data in healthcare. How do you actually index all of that data? So what we do is use the built-in Kendra web crawler, which is able to crawl static resources: static websites, essentially the list of data sources that we validate with clinicians and provide to the index. And then we also have a custom data crawler that we've built using Lambda, which is able to crawl dynamic resources and pull data out of them. It uses DynamoDB as a queue, and it loads the documents that it pulls from dynamic resources into an S3 bucket; from the S3 bucket, Kendra seamlessly pulls those documents into the index, processes them, and provides the search results to patients. So one of the cool things about Kendra is also that it's natural-language search: whenever you ask a question in the search bar, it will give you the exact answer rather than providing you with a keyword-based search result. And that's how it works."
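The custom-crawler hand-off the architect describes, writing crawled pages into S3 for Kendra to sync, can be sketched as follows. This is an illustrative sketch, not Orion's code: the bucket layout and the custom `category` attribute are assumptions, and it relies on the Kendra S3 data source convention of a `<document>.metadata.json` sidecar file carrying the document's title and attributes.

```python
import json

# Sketch: convert one crawled page into the pair of S3 objects a Kendra
# S3 data source expects -- the document itself plus a metadata sidecar
# named <key>.metadata.json. Names and attributes are illustrative.

def to_s3_objects(page_url, html_body, category):
    """Return (key, body) pairs: the HTML document and its metadata sidecar."""
    key = "crawled/" + page_url.replace("https://", "").replace("/", "_") + ".html"
    metadata = {
        "Title": page_url,
        "ContentType": "HTML",
        "Attributes": {
            "_source_uri": page_url,  # reserved Kendra attribute: link shown in results
            "category": category,     # hypothetical custom attribute for filtering
        },
    }
    return [(key, html_body), (key + ".metadata.json", json.dumps(metadata))]

# Upload loop (requires AWS credentials):
#   import boto3
#   s3 = boto3.client("s3")
#   for key, body in to_s3_objects(url, html, "clinics"):
#       s3.put_object(Bucket="my-dfd-crawl-bucket", Key=key, Body=body)
```

On the next data source sync, Kendra would ingest the document and index the sidecar's attributes alongside it.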

Jomo Stark continued:

"So what's the business impact, with the caveat that we don't have stats yet, although they are shortly forthcoming? Really, two areas to focus on. As I mentioned earlier, reduced burden on the healthcare system: reduced reliance on the ED as primary care. This is a crisis, particularly in disadvantaged populations, particularly in some urban areas. And so giving people good access to care and information, triaging them quickly, getting them to a nurse in a call center online in a couple of minutes, getting them quickly into a telehealth visit or into a physical visit, is critical. And then this online symptom checking is incredibly valuable when paired with the other pieces of this solution. Having that on its own as a standalone isn't terribly useful, but when it's also connected to the patient's care and their record and their physician and potentially their institutions of health, it can be incredibly valuable. And then telehealth support: again, telehealth exists; how do you insert it into the workflow of a typical consumer or patient? We can measure an increase in chats, calls, and telehealth engagement for home care, because that's really the goal of all of this: to keep people at home. And then the other side is improving information to consumers: increased utilization. These are basic measures we have: there have been more than 100,000 visits to this Canadian DFD home page so far, and they really haven't started promoting it yet; it's really kind of in a pilot, and that's going to change shortly. And health engagement: they've measured a large increase in visits to this medical library that people weren't really using before. They're actually using it now.
Looking forward, we have a long list of upgrades requested by our customer. For the next version, there will be many new features being rolled in. I mentioned self-referral; that's a big one. The ability to refer yourself to a mental care specialist or a regular specialist, or any number of other things, is incredibly valuable. We're looking to integrate Amazon Chime to create real-time video visits. And then we will continuously improve the search by increasing the number of data sources, fine-tuning the relevancy of the sources, and improving the accuracy of the results. And then being able to search video transcripts themselves: this ability to search unstructured data is huge, and that, tied to the natural-language piece, is really, we think, a platform for success."

Presenter said:

"Thank you, Jomo, for sharing your journey."

Presenter said:

"All right. Um, so I want to take a few seconds to share some of the exciting announcements that we had for re:Invent. We launched two key features that are really exciting. The first one is table search, and this is the ability now for Kendra to return answers that are embedded in HTML tables. So if you have a table that has, I don't know, like a credit card comparison table, you can ask questions about which credit card has the lowest APR; or if you have a specifications table, we can now understand what the table is, pinpoint the data or the answer in the table, and return it back to you. I'm going to show you an example of that. And the second is we've expanded our support for semantic search to an additional seven languages: Spanish, French, German, Portuguese, Japanese, Korean, and Chinese. So customers with content in these languages will now benefit from the full semantic search experience: the question-answering capabilities, those instant answers that we showed, the FAQ matching, and the semantic document ranking features. Super exciting. And just to show you an example of what table search looks like: here, for example, is a Wikipedia table that we got from the internet. It has information about companies, their industry, their revenue, and a bunch of other information, in a table format on an HTML page. So when you ingest this kind of data into Kendra and you want to, let's say, ask which companies have the highest revenue, and actually get a list of results from that, here's what the experience is going to look like now with table search. I just enter my query here, and there's no tuning; you just enter the query, and Kendra now returns a preview of the answer by selecting the right rows, the relevant rows.
These are the companies with the highest revenue, pulled out from a much larger table, and we display down here also the extracted narrative answer to give you more context. This is really powerful, because you're now able to interrogate, to ask questions on specific tabular content, and get a rendered result like this that gives you the instant answer right away, but in a tabular format. So a very exciting feature.
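Consuming a table-search result programmatically might look like the sketch below. This is an editor's illustration, assuming the `TableExcerpt` shape of Kendra's `QueryResultItem` (rows of cells, each cell carrying a `Value` and a `Header` flag); field names should be checked against the Kendra API reference.

```python
# Sketch: split a Kendra TableExcerpt into header labels and data rows
# of plain cell values, ready to render as the preview described above.

def table_rows(result_item):
    """Return (headers, data_rows) extracted from a table-search result item."""
    headers, rows = [], []
    for row in result_item.get("TableExcerpt", {}).get("Rows", []):
        values = [cell.get("Value", "") for cell in row.get("Cells", [])]
        if any(cell.get("Header") for cell in row.get("Cells", [])):
            headers = values  # a row flagged as header supplies column labels
        else:
            rows.append(values)
    return headers, rows
```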

Um, so now, getting started with Kendra. What do we recommend to customers who ask, hey, how do I get started? Well, run a flash POC just to try it for yourself with your own data. Very simple. Number one, you pick a use case, like a website search or a document search. Then you determine what questions you'd like to ask: some keyword questions and some natural language questions. Come up with those, and then index that content very easily with the web crawler, for example, like they explained at Orion Health; we have a web crawler right out of the box, or you can drop content into an S3 bucket and just index that. It's very simple to get that index, and you're ready to actually query. Test your queries, run them there, and compare the results with your existing search engine if you have one; and if you don't, just run a quick assessment of the quality of those search results with Kendra. And to actually get started, you can go to the link here, aws.amazon.com/kendra, and our Developer Edition offers a free tier of 750 free hours for first-time users. So it's very simple to get up and running and test Kendra with your own data.

All right. Well, thank you, everybody, for your attendance today. We hope you found this session valuable, and if you have any questions, we'll be happy to answer those offstage here or outside. But again, thank you very much."
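The flash-POC steps above, pick test questions, run them, and compare results, can be wrapped in a small harness. This is an illustrative sketch: the harness takes the query function as a parameter, so the same loop could wrap a live Kendra call (`boto3` `kendra.query`) or an existing search engine for a side-by-side comparison; the question strings are placeholders.

```python
# Sketch of a flash-POC harness: run each test question through a search
# backend and record the top result's title for manual quality review.

def run_poc(query_fn, questions):
    """Return {question: top result title (or None if no results)}."""
    report = {}
    for q in questions:
        items = query_fn(q).get("ResultItems", [])
        report[q] = items[0].get("DocumentTitle", {}).get("Text") if items else None
    return report

# Usage against a live index (requires AWS credentials):
#   import boto3
#   kendra = boto3.client("kendra")
#   report = run_poc(
#       lambda q: kendra.query(IndexId="my-index-id", QueryText=q),
#       ["how do I reset my password", "what is the refund policy"],
#   )
```

Running the same question list through both Kendra and the incumbent engine gives the comparison the speaker suggests.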
