Transform self-service experiences with AI and Amazon Connect

Welcome, everyone. Thanks for joining us today for the opportunity to speak to you about how to transform your self-service experiences with AI and Amazon Connect.

My name is Jim Kreitler. I'm a principal product manager here on the Connect team, focused on our IVR offering, and I'll be joined by two of my colleagues.

First up is Mike Nolan, a worldwide SA leader also focused on IVR, and then Marcelo Silva, a senior product manager on the Lex team, is going to recap some of the exciting launches that we announced this week.

We're going to start off by talking about why conversational experiences are so important to customers today, and then where to focus to have the biggest impact for your customers. We'll talk about how we're making it easier to delight customers and deliver these conversational experiences with Amazon Connect, and then cover the success story from DoorDash as they made the journey to conversational AI in their IVR over the last year.

Now again, you may have seen in the program that we were going to be joined by DoorDash on stage. Unfortunately, our colleague was called away on some urgent family business. So I'll do my best to relay that story to you.

And then Mike's going to dig into practical considerations that we've learned from our many migrations with some of our largest customers, and next steps for any of you who might be taking a similar journey in the future.

And finally, Marcelo will come up and recap some of the launches from this week, all the gen AI launches that are particularly useful in the IVR.

Now, before I get started, just in case you're earlier in your journey with Amazon Connect or just getting to know us, let me provide a brief overview of Connect.

Amazon Connect is a single customer experience application offering one seamless experience for your entire team and your customers. The reason we built Connect is that back in 2017, many of our customers would ask us what solution we were using to provide customer service at Amazon, as they admired the customer experience that we delivered. We heard this again and again. So in March of 2017, we made it available as an AWS application to be used by companies of all sizes and across all industries. Today, just within Amazon, over 50 different groups use Amazon Connect to provide their customer experience. This includes Zappos, Ring, and Audible, for example, and of course Amazon's own retail operation, which alone has over 100 contact centers globally and over 100,000 agents on the solution at any given time.

Connect is one of the fastest-growing applications here at AWS. Today, tens of thousands of customers of all sizes use Connect to provide their customer experience, hosting over 10 million discrete interactions each and every day.

So before I get started, I just want to thank you all in the room, and thank all of the contact center managers, supervisors, and agents who have trusted Connect to help accelerate and transform their contact center operations. It's the success of all of you that keeps us so passionate about innovation and focused on working backwards from our customers to deliver innovation that solves real-world problems.

Alright, with that, let's get to the subject at hand. Again, as I said, we always work backwards from our customers here at Amazon. So what do we hear from customers in this area?

Well, for business users, everyone here in this room, we know that customer experience is more important than ever. Gartner tells us that 80% of organizations today expect to compete mainly based on customer experience going forward, while PwC tells us that 73% of consumers say that customer experience is now the number one thing they consider when deciding whether or not to purchase from a company.

None of this is a surprise, of course, to everyone here; we're all customer experience professionals, which is probably why we're in the room. So how are we doing? How's it going?

Well, unfortunately, in a study that we commissioned at AWS just last year, we found that 49% of customers reported actually having more bad experiences with customer service than in previous years. And at the same time, 46% said they would rather go to the dentist than call customer service.

Now, I've got some questions about how this survey was conducted. Was it a binary choice, check A for "go to the dentist," check B for "call customer support"? That seems like a really strange question to put in a survey. Or were they writing it in? Did 46% actually write in "I'd rather go to the dentist"? Either way, it's a very illustrative stat. But as you dig deeper, customer preferences are often very surprising.

In the same study, when those same customers were asked, "If you do have to contact customer support, what's your preferred channel?", they overwhelmingly reported that they still prefer the voice channel; they still prefer the phone. And in fact, in 2022, more customers said that they preferred the phone than in previous years.

Now, this is not a new trend either. Back in 2021, ContactBabel reported that, at least in the US, 75% of all interactions were still by voice despite all the digital channels that we use today. And this continues: in 2022, McKinsey reported that 61% of CX leaders saw an increase in total calls last year.

So the message here is that the phone is not going away. It often remains the front door for our customers and their first point of contact when they need help.

So of course, this is one of the reasons why the IVR is important. It's the first thing that customers experience when they contact your brand. And we want to evolve that to provide the best experience for our customers.

The other reason the IVR needs to evolve is that, in many cases, we're still greeted with these static-menu, touchtone, or directed-dialogue types of experiences. It really hasn't changed much in the 20 years since the early days when the IVR first became available to us. Let's put a finer point on this.

I want to draw a parallel between the IVR, or conversational experiences, and digital, web, and mobile experiences.

On web and mobile, of course, our user interface is a graphical user interface, and our example here is Amazon's own mobile application and website. Over the last 20 years of digital transformation, we've all invested a lot of time and effort to understand our customers' needs and preferences and to optimize these experiences, making great leaps and bounds from where we started. This example is Amazon's own, one of our first landing pages. So, big differences here.

Now, again, on web and mobile, that user interface is a graphical user interface. But in the IVR, for phone calls, conversation is the user interface, right? And when you think about these static, menu-based experiences that are still out there in many cases where conversation is not available, they really resemble exactly where we were on the web 20 years ago.

Take a look at that early example of Amazon's website: you have hot links and headers that basically tell you what page you can visit, and then a description of what you can do there. This is not so different from being greeted by a static, menu-based IVR: press one for customer service, press two for sales, press three for billing. Your experience is very much dictated by the system and its capabilities.

So obviously, this is not what we as customers expect today, and we shouldn't be surprised that these static, menu-based experiences are not exactly delighting our customers.

The good news is that today, it's easier than ever to deliver the natural, conversational experiences that customers are coming to expect. And it's in working with customers like DoorDash that we've seen how Amazon Connect and Lex can be used to deliver these experiences more quickly and easily than ever before.

In fact, customers that have upgraded their legacy menu-based IVR experiences to conversational with Connect are seeing as much as a 69 to 91% decrease in the time that customers are spending in the IVR. I think that would make all of us a little bit happier. And of course, this makes sense, because instead of taking customers through a long, complicated menu, we're able to just ask them, "How can I help you?" That's making things easier, but we shouldn't stop there in our effort to make things easier for customers.

It's also easier than ever today to integrate with our backend systems, such as CRMs and booking systems, to personalize the customer experience, understand what they might be trying to do, and proactively offer them options to solve that problem before they even have to say a word.
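
To make that concrete, here's a minimal sketch of the kind of lookup this implies, assuming an AWS Lambda function invoked at the start of a Connect contact flow. The `get_open_order` helper is a hypothetical stand-in for your CRM or booking-system call; the event and response shapes follow the standard Connect contact-flow Lambda contract.

```python
# Minimal sketch of personalizing the IVR greeting from a backend lookup,
# assuming a Lambda invoked at the start of a Connect contact flow.
def lambda_handler(event, context):
    # Amazon Connect passes the caller's number in the contact data.
    phone = event["Details"]["ContactData"]["CustomerEndpoint"]["Address"]
    order = get_open_order(phone)

    if order:
        # The flow reads these attributes to offer a proactive option
        # ("Are you calling about today's order?") before the caller speaks.
        # Connect expects a flat map of string attributes in the response.
        return {"hasOpenOrder": "true", "orderId": order["id"]}
    return {"hasOpenOrder": "false"}

def get_open_order(phone):
    # Hypothetical stub so the sketch runs; replace with a real CRM query.
    return {"id": "ORD-1234"} if phone else None
```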

Now, a great customer example is TransUnion. When TransUnion moved from their menu-based touchtone IVR to a natural-language, conversational experience, they were able to decrease the time it took customers to navigate the IVR from two minutes to just 18 seconds, almost a 90% decrease. They were also able to reduce their transfer rate by 50% and achieve overall savings of around 40%.

And this also makes sense, because if we're just asking customers, "How can I help you?", customers are able to respond in a natural, direct sentence. We're able to better understand exactly why they're calling before they get frustrated and zero out, and get them to the right place the first time.

And as we'll see with the DoorDash example, these stats are usually pretty consistent across the cases we see.

Now, that sounds easy enough, right? Sounds logical, something all of us should be doing. But the IVR, of course, is part of a much larger customer experience, a much larger customer journey, and IVR modernization efforts have to both fit into that larger journey and be balanced with the other initiatives that all of us are focused on to optimize our customer experience.

So let's take a step back and, again working backwards from the end customer, understand what this experience should look like from the consumer's point of view.

At a high level, our expectations as customers, while maybe not that easy to deliver on, are pretty simple and understandable. We expect the businesses we work with to meet us where we are, whether that's on mobile, on the web, in a physical store, or on the phone; to understand who we are and what we might be doing, based on either our behavior or our past history with the brand; and if any offers or ads are going to be made, we're really only likely to engage with the ones that are personalized and contextual for us.

And as we transition to the contact center, should we need service: know me if I'm coming from your web or mobile experience; understand what I was trying to do there; understand my history with the brand to personalize the experience; and make it easier for me to get to the point, get the service I need, and move on.

And of course, if I do have to speak to an agent, we expect the agent to know not only about our customer journey to get there, but about any information that we've shared up to that point, so that we don't have to repeat ourselves.

Again, that sounds pretty simple. But as we all, especially in this room, know, it's not. And when we start to lay out the business requirements needed to deliver on this experience, things quickly get complicated.

For example, just to meet customers where they are at the start of any journey, we need to deliver a consistent experience across web, mobile, search, social, and a variety of other channels. These experiences need to be personalized and proactive if we want customers to engage with them, so we have to leverage first- and third-party data to get the most accurate picture of our customers possible. All of these visits, and what customers do, need to be attributed so that we can measure what's working, where our customers are, and where to invest in order to serve them best. And then we need to do this across all the different types of journeys, marketing, sales, and service, and across all of the systems and organizations that support them. It's starting to get a little more complicated, right?

As we transition to the contact center, we of course have to identify our customers first in order to personalize the experience and make proactive, contextual recommendations. Oftentimes customers need to be authenticated if they're going to do any kind of self-service or automation, if we're to transact on their behalf from the system. The experience needs to be conversational; it needs to be something that customers enjoy, or are at least willing to work with, if we expect them to stay in that channel and either provide the information we need or self-serve. They have to believe it's capable of understanding them and capable of actually allowing them to solve problems, in order to keep them from zeroing out or just trying to get to an agent as quickly as possible. And increasingly, the experience needs to be multimodal. This is as opposed to omnichannel: multimodal here means that if the customer can be served better in another channel, we have the ability to move them from one channel to another where we can provide a better level of service or allow them to solve their problem more quickly.

And finally, when they do talk to an agent, the agents need to be informed; they need to be empowered with the right tools and the right information to serve the customer. All of these interactions need to be analyzed and assisted so we can provide the highest level of service and constantly iterate on that experience to improve it for our customers.

So it's pretty complicated to deliver on the simple and entirely reasonable expectations of our customers. And this is why we're constantly focused here at Connect on embedding powerful AI-enabled features directly into Connect, to make it easier for you to turn them on, experiment, optimize, and deliver these new experiences, whether in the IVR or anywhere in the customer journey. This includes powerful features like:

  • Customer Profiles, which uses AI to provide a single, unified view of the customer and makes it easier to make predictions and understand why customers might be calling you

  • Voice biometrics for authentication with Voice ID, which uses passive voice biometrics to authenticate the customer while they're speaking with the IVR or an agent

  • High-quality neural text-to-speech and natural language understanding from Polly and Lex, so you can deliver the conversational experiences we're talking about today

  • Knowledge management and agent assist with Amazon Connect Wisdom, so we can get the right information to agents at the right time and allow them to best serve our customers

  • And even conversational analytics with Contact Lens, so that we can constantly analyze how we're doing and adjust and improve the service we're providing our customers.

And of course, as you've probably already heard this week, we're really excited to introduce a number of upgrades to all of these features that embed generative AI directly into the services and provide the guardrails so that you can experiment while continuing to deliver a consistent level of service as you learn how to use AI and generative AI in your customer service operation.

Now again, Marcelo is going to come up and go into a little bit more detail on those in just a minute.

So with that, before I get into the DoorDash example, I just want to provide a couple more stats, the ones that make us very confident that conversational AI is the right place to invest to have a high impact for your customers.

First of all, from NTT last year, we learned that over 70% of consumers now say that they actually prefer automation and self-service over speaking to a live agent. So now is the time for us to start experimenting and getting these automated experiences right for our customers.

Secondly, when we work with customers that are still delivering these static, menu-based IVR experiences, we typically find that 80% of calls relate to just 20% of the menu options in the IVR. So the old 80/20 rule, the Pareto principle, applies very much here: if you've got, say, 10 options in your IVR, 80% of your callers are calling for just two of those options.

Now, you can of course optimize the experience for those callers by putting those two options first in the IVR. Unfortunately, what we also see is that the other eight options tend to relate to more complicated, more difficult problems, cases where customers are much more likely to be frustrated and have less patience. So it's really hard to strike the right balance and create an optimal experience for everyone in this type of menu-based design.

And again, this is our own data: the customers that have moved from static, menu-based experiences to conversational are seeing a 69 to 91% decrease in the time that customers spend in the IVR and a corresponding decrease, typically around 20 to 50%, in their transfer rate. And it was no different with DoorDash.

This is why I was so excited for DoorDash to come on stage and share the story of how they moved to conversational over the last year. Again, I'll do my best to summarize that story for you today.

You might be thinking, "I need to look at my call disposition codes. I need to put them in a model and try to draw some inference from them to predict what my customers are going to want." That's great, and I love it when customers are thinking big about predictive intent. But just like migrating to a conversational IVR experience, this is an area where you can think big, start small, and iterate.

As an example, take the airline example I just went through. I don't need machine learning for that. I know who the customer is; I know it's Mike, and I know he's got a flight tomorrow. If he's calling me 24 hours before the flight, I bet he's calling about that flight. Let's ask him. Something like the sketch below is enough.
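
A minimal sketch of that kind of rules-based prediction, assuming a Lambda invoked from a Connect contact flow; `get_next_flight` is a hypothetical stand-in for your reservation-system lookup, and the attribute names are illustrative, not a Connect API.

```python
from datetime import datetime, timedelta, timezone

def lambda_handler(event, context):
    # Amazon Connect passes the caller's number in the contact data.
    phone = event["Details"]["ContactData"]["CustomerEndpoint"]["Address"]
    flight = get_next_flight(phone)  # hypothetical booking-system lookup

    # Simple rule: a call within 24 hours of departure is probably about
    # that flight, so let the flow lead with "Are you calling about it?"
    if flight and flight["departs_at"] - datetime.now(timezone.utc) < timedelta(hours=24):
        return {"predictedIntent": "UpcomingFlight",
                "flightNumber": flight["number"]}
    return {"predictedIntent": "Unknown"}

def get_next_flight(phone):
    # Hypothetical stub so the sketch runs; replace with a real query.
    return {"number": "AA123",
            "departs_at": datetime.now(timezone.utc) + timedelta(hours=20)}
```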

Also think about being proactive with your customers and reaching out to them before they even call into the contact center, or before they even know they have a problem. When I was preparing for this presentation, it was the evening of Halloween, and my lovely wife was on candy duty while I was in my office plugging away at this presentation. I have a video doorbell; for those of you that don't, you typically receive notifications when people are near your door or press the doorbell, and on Halloween I got quite a few of those while I was working. And I thought, wouldn't it be a great customer experience if, on the morning of Halloween, they just sent me a notification saying, "Hey, we know it's Halloween, you're going to get a lot of these; press this button to mute your notifications for the evening"? That's delighting your customers and raising the bar for customer service.

Think big, start small. Start with an experience similar to what your customers are used to today and layer on the capabilities of Amazon Connect and Amazon Lex, like text-to-speech, dynamic prompting, and personalization. Once you have a baseline on Amazon Connect, introduce speech-enabled intent detection at the top of your IVR and start gathering data on how your customers interact with your bot.

Now, you might be thinking, "Well, Mike, didn't you just tell us there's a risk to that? It might hurt our KPI performance and our customer experience." And yeah, you're right. What we like to see our customers do, and what DoorDash has done, is implement A/B testing, as in the sketch below. Let's give 1% of our customers this conversational experience and monitor it very closely. If it's working, let's turn it up. If it's not working, let's turn it off, review the data, iterate, and try again. Which brings me to my next slide: iterate.
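
A minimal sketch of that percentage-based assignment, assuming a Lambda invoked at the top of the contact flow; the attribute name and the 1% value are illustrative, not a Connect API.

```python
import hashlib

CONVERSATIONAL_PERCENT = 1  # start at 1%, dial up as the experiment proves out

def lambda_handler(event, context):
    contact_id = event["Details"]["ContactData"]["ContactId"]
    # Hash the contact ID so assignment is deterministic and evenly spread.
    bucket = int(hashlib.sha256(contact_id.encode()).hexdigest(), 16) % 100
    experience = "conversational" if bucket < CONVERSATIONAL_PERCENT else "menu"
    # The flow branches on this attribute: Lex bot for one group, menu for the other.
    return {"experience": experience}
```

Connect's built-in "Distribute by percentage" flow block can do a random split without any code; the Lambda version just keeps the assignment deterministic per contact, which makes the experiment easier to analyze.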

Follow the data and make data-driven investments. When you have speech-enabled intent detection at the top of your IVR, you're going to get two really key pieces of information: the utterances, how customers are actually talking to your bot, and the volume, what customers are asking for within your IVR. You can use that to inform investment areas for more conversational experiences and conversational self-service, and to monitor and meet the changing demands of your customers.

Continue driving personalized experiences that increase customer satisfaction and brand loyalty. Finally, use data to inform your holistic customer experience improvement. The data you're collecting can inform not only future investments in your IVR, but also your digital collateral and how your customers engage with you holistically as a brand.

So we've talked about this mental model of migrating to a conversational experience, but this is really where the rubber meets the road: what makes a conversational IVR project successful? Like most IT projects, it's the people doing the work and the process they follow. So we're going to talk about people, and more specifically the roles that you want on your conversational IVR project. We're not going to talk about roles like project managers or DevOps or QA; those are super important roles that absolutely have a place in your conversational IVR project. But we're going to focus on the roles that are unique and specific to a conversational IVR project. To illustrate some of the points, I'm going to revisit a point that Jim made earlier about the IVR being kind of a laggard when we think about overall customer experience.

Because of that, the roles critical to this type of project may not be intuitive to all customers, so we're going to use a mobile application to help draw some parallels to why these roles are critical. When you build a public-facing mobile application, one of the roles critical to that type of application is a user experience designer. They keep customers front of mind and provide an optimal experience for the customer in the mobile application. They understand the nature of how humans interact with graphical user interfaces. They take your brand's style and voice into account when designing your mobile application. Finally, they design the alignment of visual elements in an aesthetically pleasing way that allows customers to get what they need quickly.

Like a UX designer, a conversation designer keeps the customer front of mind and supports an optimal experience for your customer within the IVR. They understand how humans talk to things. They manage things like cognitive load, by not giving your customers too many options, and prompt efficiency, by making sure the questions we're asking customers aren't too long or too short but exactly what's needed to advance the conversation. And they think about conversation repair: what happens when a customer says something we're not expecting, and how do we get the conversation back on track to get the customer where they need to go?

They also establish or adhere to a brand's voice. Now, when I say a brand's voice, I'm not necessarily talking about the text-to-speech voice that you select within your Amazon Connect instance. I'm talking about your bot's personality: the vocabulary your bot uses, and the formal or informal way it speaks to your customer. If I'm a financial services institution, I probably want a very matter-of-fact, formal way of speaking to my customer. But if I'm a travel agency, maybe I want a more excited, bubbly experience that may be a little more informal for my customers. And yes, that's my excited, bubbly voice; that's the best I've got for you.

So when a bot personality is consistent, personable, and familiar, it signals to the user that your bot is capable. In turn, you gain trust capital with that customer, and the customer is more likely to engage with your bot. Finally, there's interface architecture design: taking the use cases that you want to enable within your IVR and converting them into a bot design and a conversation design, identifying and handling contextual paths, and bringing in personalization. If you want a delightful customer experience with effective self-service and efficient IVR navigation, a conversation designer is critical to your conversational IVR project.

Going back to our example of drawing parallels from mobile applications: one of the other critical roles for a mobile application is a mobile application developer. And for the same reason a mobile application developer is critical for mobile application development, an Amazon Lex developer is critical for conversational development on Amazon Lex. Both of these roles bring speed and expertise through their experience: they understand the ecosystem of mobile development or, on the Amazon Connect side, understand Amazon Connect, Amazon Lex, and the ancillary services around the IVR. They identify libraries and understand best practices to expedite timelines, and they look around corners for you to help avoid missteps.

They also ensure visibility into KPIs to help measure the performance of your application, and set up operational monitoring to identify and resolve issues quickly and efficiently and to inform where to improve the application. So we've covered the roles we deem critical for this type of conversational IVR project. Now let's take a look at the process they should be following.

When we think about an initial release for your conversational experience, there are really four major steps. First is discovery: understanding the use cases you want to enable within your IVR, the data insights or KPIs you want out of each use case, and the backend APIs that need to be created or integrated with to enable that experience.

The second phase is conversation design: having the conversation designer take the output from that discovery step and architect a bot and a conversation to provide an optimal experience for the customer. They'll also engage with people like your legal team or your marketing team as needed, to help engineer prompts and make sure everything adheres to legal requirements and your brand marketing. Then we have build, test, and tune.

This is where the Lex developer comes in: they take the bot and conversation architecture from the conversation designer and implement it in the bot. After they build the bot, we need test sets; we need to be able to test the bot. So they develop test sets and then, based on the results, tune the bot as needed. This is a cycle of build, test, tune; build, test, tune, until you're comfortable with the results. And then you finally deploy, again taking into consideration deployment strategies like A/B testing or, if it's possible for your organization, having employees test first to mitigate the risk to your customer experience. A sketch of a simple test-set loop follows.
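
A minimal sketch of a test-set loop against a deployed bot, using the Lex V2 runtime `RecognizeText` API via boto3. The bot IDs and the test set are placeholders; Lex's Test Workbench, covered later, gives you a managed version of this.

```python
import uuid
import boto3

lex = boto3.client("lexv2-runtime")

# Placeholder test set: (utterance, expected intent).
TEST_SET = [
    ("I need to change my delivery address", "UpdateAddress"),
    ("where's my order", "OrderStatus"),
]

def run_tests(bot_id, bot_alias_id, locale_id="en_US"):
    failures = []
    for utterance, expected_intent in TEST_SET:
        resp = lex.recognize_text(
            botId=bot_id, botAliasId=bot_alias_id, localeId=locale_id,
            sessionId=str(uuid.uuid4()),  # fresh session per utterance
            text=utterance,
        )
        actual = resp["sessionState"]["intent"]["name"]
        if actual != expected_intent:
            failures.append((utterance, expected_intent, actual))
    return failures
```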

So now we have our first bot, but we're not quite done yet. These steps are part of a larger picture, a continuous improvement cycle. The next step in this cycle is to monitor and understand how your customers are engaging with your bot. This informs two different things. The first is identifying areas where we need to tune the bot and enhance its ability to understand our customers. The second is identifying areas or use cases that we don't support quite yet, which we can then take through the discovery phase to begin the cycle and bring that use case to fruition.

Finally, as your organization comes out with new products and services work closely with your product management team to understand the needs of your customers within the IVR. This process is critical to rapidly evolving your IVR and should be considered foundational work on day one of your conversational IVR project.

So we've talked about the mental model of how we should think about migrating to a conversational experience, the roles we need on our conversational IVR project, and the process they should follow. But how do we get started?

You can get started by understanding your goals and the scope of the project, and identifying whether you have the right resources for it. What we recommend is to reach out to your AWS account team and inform them of your plans and goals. As part of that, we can help identify where a conversation designer is needed. There's a little bit of nuance there: for an initial migration, like migrating a touchtone IVR to Amazon Connect, you likely don't need a conversation designer at the very beginning; you'll need one when you start experimenting with conversational IVR. But if you're migrating from one of those highly tuned, directed-dialogue types of IVRs, there may be areas where you want to bring in a conversation designer a little early to make sure you're providing a great customer experience.

Second, participate in a light discovery exercise. We've worked with AWS Professional Services to develop a questionnaire of sorts that walks you through a set of questions, the output of which is a high-level understanding of what the scope of your project is going to be. This isn't meant to be a replacement for discovery, but it can do two things for you. One, it can open your eyes to what the actual scope of the project will look like and make sure you're taking everything you need into consideration. Two, it helps you have conversations with AWS Professional Services or our AWS Partner Network partners to see if you need to start augmenting your staff; it's a good way to start those conversations.

So before we wrap up, I just want to cover some of the Amazon Connect launches that went generally available this week. The first is generative AI on Lex, and Marcelo is going to come up in just a minute to talk through those in more depth. We also came out with Amazon Q in Connect, which uses generative AI to generate a response to a customer inquiry that the agent can use to respond, saving the agent time by not having to look up the information manually.

"We also came out with two out of the box channels with Amazon Connect to help meet your customer where they are in a convenient way. Those channels are in-app messaging or sorry in app and web calling.

And two way SMS messaging along with the in-app web calling, we also enabled video conversations. So being able to identify conversations that would benefit from maybe having a more intimate channel by turning on video and help build trust and relationships with your customers.

So without further ado, it's my pleasure to invite Marcelo Silva to the stage to explore some of the gen AI capabilities now generally available with Amazon Lex.

Thanks, Mike. OK, so here I am, sitting between you and the rest of your day, or the tons of fun you'll have tonight. So let's get this going.

As Mike mentioned, I'm a product manager with Amazon Lex, and we released a set of features that bring large language model, or generative AI, capabilities into Amazon Lex to make the experience of both the builder and the end user much better than before. So let's go through these experiences.

You're going to see that I'm not only talking about things we're doing today; let's first do a quick recap of what we've built since the inception of Lex. Lex was launched in 2017. We launched the Lex V2 APIs, which provide streaming capabilities in Amazon Lex.

We enhanced the IVR functionality of Lex with things like wait-and-continue, runtime hints, and a variety of other features and functionality that improve the IVR experience for the customer.

We have 25-plus languages released in Lex and available today. We also improved the building experience with the Visual Conversation Builder, which allows you to have a drag-and-drop experience in Amazon Lex, much like Connect flows, to build your IVR experience.

And this year we released the Test Workbench, which allows you to create those test sets Mike talked about and, within Amazon Connect, create the A/B testing experience, or use your transcripts of the conversations between users and agents from conversation logs as external test sets you can run, so you can identify regressions in your design, or see where your design is doing better than before.

And now we also have analytics right out of the box, which gives you conversational performance as well as the overall performance of the bot. You can see, on the console, the missed utterances and the paths that conversations take between the customers and the bot.

So on Monday, we announced four features that add generative AI, or large language models, into Amazon Lex: two features that empower developers with a better developer experience, which we'll talk about first, and then two features that help with the inference of the conversation between the human and the bot, which we call the end-user experience.

So let's talk about the two developer experience features. We have the Descriptive Bot Builder, which uses a natural language description to build a bot. In Lex, you have the concept of intents, slots, and utterances, and that's how you tune a model that eventually creates the experience you're developing in your bot definition. Whether you know it or not, you've all been dealing with language models for the longest time.

If you've been touching Lex anywhere from its inception in 2017 to now, you've been dealing with a language model. Lex uses a pre-trained model, not at the billions and trillions of parameters that we have today, but in the hundreds of millions of parameters.

And even though over the last year your families have figured out what this is all about and started giving you advice about conversational experiences, because now they're all generative AI subject matter experts and know how chat works and all that kind of stuff, you've been dealing with this for the longest time. Lex has always been based on a language model that uses your bot definition to fine-tune the experience that you give to your customers.

Utterance generation is another piece: curating the training data, the utterance data, that goes into the process of building a bot. So let's quickly take a look at both of them.

With descriptive bot building, you go to the console and say you want to create a new bot. Instead of starting from a template or from scratch, you give a description: "I want to build a travel bot that books hotels, and I need check-in date, number of nights, and number of guests."

"And I want to make sure there's a confirmation at the end," and so on. Based on that description, Amazon Lex generates a bot definition with intents and slots that you can then edit, modify, or accept as they are. Your bot is then there for you to start integrating, as Mike mentioned, with backend systems and to create the business logic for the type of conversational experience you want to deliver to your customers.
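
For those who prefer APIs to the console, here's a minimal sketch of the same flow, assuming the Lex V2 model-building APIs announced alongside this feature (StartBotResourceGeneration and DescribeBotResourceGeneration); the bot ID is a placeholder for an existing draft bot.

```python
import boto3

lex_models = boto3.client("lexv2-models")

# Kick off generation from a natural language description of the bot.
resp = lex_models.start_bot_resource_generation(
    botId="BOT_ID",  # placeholder: your draft bot
    botVersion="DRAFT",
    localeId="en_US",
    generationInputPrompt=(
        "Build a travel bot that books hotels. Collect check-in date, "
        "number of nights, and number of guests, and confirm at the end."
    ),
)

# Poll for the generated intents and slots, then review and edit them.
status = lex_models.describe_bot_resource_generation(
    botId="BOT_ID", botVersion="DRAFT", localeId="en_US",
    generationId=resp["generationId"],
)
print(status["generationStatus"])
```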

So descriptive bot building has taken what used to be creating slots, generating utterances, and thinking through the conversational experience from weeks and months down to a matter of minutes. You still need the people aspect, the conversation designers and the Amazon Lex developers, to create the overall experience you envision for your end customers.

However, to bootstrap that, the Descriptive Bot Builder gives you a leg up. At the same time, as you identify new intents or new workflows that you want to build into your existing bots, you can now use utterance generation to curate the training data for a particular intent.

So if you have an intent, you'll be able to press a button and generate five utterances at a time, based on the description of your bot and the name of the intent. And with utterance generation, every single time you click that button, we take into consideration what you have accepted and what you have added, and we generate new utterances based on that.

Now, the end-user experience. Let me go quickly into what's called assisted slot resolution. But before I do, let's define slot resolution. In Amazon Lex, you take inputs, starting by classifying an intent from the utterance the customer says. As my colleagues mentioned: "How can I help you?" "I want to book a hotel." That utterance goes through what's called intent classification: you identify an intent, and then within that intent you work to fulfill that intent, that action of the customer.

If you're doing a self-service experience, you need to collect inputs. Those inputs are called slots, and they need to be translated, or normalized, into data types for APIs. Which means when you say, "I want to travel tomorrow," "tomorrow" is not something the machine is going to understand directly, but the machine can resolve that "tomorrow" into December 2nd, in a specific format that Lex expects, to share via APIs with external systems.

Same thing when you're asked how many people are traveling and you say "four," and you want to capture that as a number. The concept of taking artifacts, what we call entities, in a slot and normalizing them to specific values or data types is slot resolution. The sketch below shows what those resolved values look like from a Lambda function's point of view.
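
A minimal sketch of reading resolved slot values in a Lex V2 fulfillment Lambda. The slot names are placeholders for the travel-bot example; the event shape (`sessionState.intent.slots[...].value.interpretedValue`) is the standard Lex V2 Lambda input format.

```python
def lambda_handler(event, context):
    slots = event["sessionState"]["intent"]["slots"]

    # "tomorrow" arrives already resolved to an ISO date like "2023-12-02".
    check_in = slots["CheckInDate"]["value"]["interpretedValue"]
    # "four" (or, with assisted slot resolution, "my wife, two kids and me")
    # arrives resolved to a normalized number.
    guests = int(slots["GuestCount"]["value"]["interpretedValue"])

    message = f"Booking for {guests} guests starting {check_in}."
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {**event["sessionState"]["intent"],
                       "state": "Fulfilled"},
        },
        "messages": [{"contentType": "PlainText", "content": message}],
    }
```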

So what is gen AI assisted slot resolution? You enable the feature on a slot-by-slot basis, and we first use the regular NLU inference to try to resolve the slot; we're not going to use generative AI all the time, because it's not needed. If you're trying to collect from the customer how many guests are traveling and the customer says "four," I don't need an LLM to resolve "four" to the number four.

But if the customer uses a different type of language and says, "You know, I'm traveling next week and I'm really excited; I'm traveling with my wife and two kids," most of the intent-based models today, roughly all of them, cannot reason that that's four people.

However, an LLM is super capable of that kind of reasoning: understanding and resolving that sentence, myself, my wife, and two kids, to a data type of the number four. So that's what gen AI assisted slot resolution does: it takes the customer's utterance, in very natural language, and translates it to a particular data type.

The last feature we released is conversational FAQ, what we call the QnAIntent, which allows you to handle queries outside of your predefined intents. You build your bot, you create your intents, you create your dialogue management and strategy. But at times, customers ask questions beyond that, much like Mike mentioned, or, even more aptly, the way Jim talked about where we were in the old days of building web applications.

What did you do when you built a web app in the nineties or early two-thousands? You started building a web page, then you figured out, "Hey, maybe I need some Q&A; maybe someone always asks about my baggage policy." So you went in and added it so people could find it.

The same thing happens in a bot. We don't know, and don't want to build, every single intent known to man, but we do want to allow customers to step away from the predefined intents while staying within the realm of your business.

If you're an airline, as in Mike's example, and I call and say, "Hey, how many bags can I take?", you don't want the LLM to say, "Based on your fare, you can take one bag, but if you had booked Delta Airlines you could take two bags." If I'm United, I'm not happy about that.

So you want to make sure that the content from which responses outside the predefined intents are drawn is scoped. And that's what RAG, Retrieval Augmented Generation, does: it allows you to take your own content, curate it, and actually use it to provide answers to your customers based on that content, rather than on the model's pre-training data.

For example, you're in the middle of booking a reservation and you say, "You know, I'm going to stay three days, but how many bags can I take?" The bot can fetch that answer from an FAQ, come back and say, "Yes, you can take two bags," and you continue: "OK, I want to book five days now," and so on. The concept of conversational FAQ, the QnAIntent, is to create a fallback before the fallback, where you can use your knowledge base to provide an answer to the customer, and they can then go back to the directed dialogue that you're creating with your intent-based application today.

It works with knowledge sources like Knowledge Bases for Amazon Bedrock, Amazon Kendra, and OpenSearch. And now, with the announcement of Q in Connect, we'll be working to support that knowledge base as part of conversational FAQ as well. Again, it's super simple: you go in, identify the model you want, pick the knowledge source you're looking for, and we build everything for you.

The concept of RAG is to use vector databases, or embeddings, to retrieve your content and then generate an answer from it, what's called a retriever and a generator. And we do everything for you: we create the backend system, we create the embeddings, we create the indexes associated with them, and we give you the answers without you having to do anything in the backend or interact with any other system. You just have to put your data into an S3 bucket.

Today that's the only location we support, or a Kendra index, or a knowledge base on Bedrock, and we do it all. A typical example would be like this: assuming you're booking a hotel with a particular brand, in the middle of the conversation you ask, "What are the different amenities between Miami and Orlando?" That data could be in a knowledge base, spread across multiple documents: one might say the Miami property has these amenities, another that the Orlando property has those amenities.

What RAG is going to do for you is not only fetch that data with embeddings, but also summarize the difference between the two, so you get an answer like, "By the way, the one in Miami has a 24-hour pool and the other one has different hours." Whatever information you have in these knowledge sources, you're not only fetching it, you're providing it to the customer in a way that's consumable, summarized, and natural.

And in the same way, the contextual information is maintained. When, on the second question, I say, "Which location is best for kids?", it's not looking at all the properties the hotel brand has; it's looking between Miami and Orlando, looking at the descriptions of those properties, and identifying that Orlando has a kids club and other things that make it more suitable for kids. That's why it provides that particular answer. To make the retrieve-then-generate idea concrete, here's a toy sketch.
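
This toy sketch illustrates only the retrieve step of RAG with a deliberately simplified stand-in embedding; a real deployment uses a learned embedding model and a vector store, and Lex's conversational FAQ builds all of that for you from the S3 content.

```python
import math
import re
from collections import Counter

DOCS = {
    "miami": "The Miami property has a 24-hour pool and a rooftop bar.",
    "orlando": "The Orlando property has a kids club and a water park.",
}

def embed(text):
    # Stand-in embedding: a bag-of-words vector. Real systems use a learned
    # embedding model, but the nearest-neighbor geometry works the same way.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(question, k=2):
    q = embed(question)
    return sorted(DOCS.values(), key=lambda d: cosine(q, embed(d)),
                  reverse=True)[:k]

# The retrieved passages are then handed to an LLM along with the question,
# which generates the summarized, conversational answer.
print(retrieve("Which location is best for kids?"))
```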

At the same time, as we talked about, you might be in the middle of a self-service workflow, booking a hotel, and you ask a question that goes off script, meaning outside of your predefined intents. You can let the QnAIntent do the work of providing the information to the customer and, using a Lambda function, easily come back to exactly the point in the conversation where you left off on that particular intent, collecting the information you're looking for. A minimal sketch of that hand-back follows.
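
A minimal sketch of resuming an interrupted booking after a side question, assuming a Lambda that has answered the FAQ and stored the in-progress intent and slot in session attributes; `answer_from_faq`, `pendingIntent`, and `pendingSlot` are hypothetical names, and the response uses the standard Lex V2 `ElicitSlot` dialog action.

```python
def lambda_handler(event, context):
    session = event["sessionState"].get("sessionAttributes", {})
    answer = answer_from_faq(event["inputTranscript"])  # hypothetical KB lookup

    # Hand control back to the booking intent at the slot we were eliciting,
    # prefixing the FAQ answer so the customer hears both in one turn.
    return {
        "sessionState": {
            "sessionAttributes": session,
            "dialogAction": {
                "type": "ElicitSlot",
                "slotToElicit": session.get("pendingSlot", "Nights"),
            },
            "intent": {
                "name": session.get("pendingIntent", "BookHotel"),
                "slots": {},
                "state": "InProgress",
            },
        },
        "messages": [
            {"contentType": "PlainText", "content": answer},
            {"contentType": "PlainText",
             "content": "Now, how many nights would you like to stay?"},
        ],
    }

def answer_from_faq(question):
    # Hypothetical stub so the sketch runs; a real implementation queries
    # the knowledge base (e.g., Kendra or a Bedrock knowledge base).
    return "You can take two bags on this fare."
```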

The important thing to realize is that all these different interactions continue to be contextual. In conversations, it is extremely important to maintain context. In these two examples, the second interaction had context, because that's how humans talk: they don't remember to say, "Can you tell me which of the Orlando and Miami properties is best for kids?" They just say, "Which one is best for kids?" So the context is important.

With that, I want to thank you for being here with us. We truly appreciate it, and it's an honor to have you listen to us for an hour. Enjoy the rest of the conference. Thank you.
