Principal Financial enhances CX using call analytics and generative AI

Good afternoon, everyone. Thank you so much for joining us today. My name is Atika Sardana Chandra and I am a senior product marketing manager for Amazon Bedrock.

So why are we here today, in this new era of generative AI? We thought we should discuss how you can elevate the customer experience in the contact center space using AI and generative AI. We are going to focus on how you can take all the data you've collected through customer conversations and other customer contacts and use it to derive actionable insights that improve performance and boost your business.

Joining me today are my colleague Chris Lott and our customer Miguel Sanchez.

My name is Chris. I'm a senior solutions architect for Amazon Transcribe.

Good afternoon. I'm Miguel Sanchez. I am an analytics director and regional chief data officer at Principal Financial Group.

That's all of us.

So we do have a fully packed agenda for the session today. We are going to start with the key contact center challenges, the personas and what their day-to-day life looks like, and how you can use AI to form contact center solutions that alleviate those challenges. We'll cover the benefits seen by some of our customers, then move on to some of the latest generative AI innovations and how Principal Financial Group has used AWS services to build their post-call analytics solution and derive insights from all their customer conversations.

So first of all, I want to start with three different personas. The first and most important persona is our customers. With a show of hands, how many of you would like to spend 10 minutes on a call? Press one for this, press two for that, press three for something else. No one, right? Our customers don't like it either. More than 80% of customers today want self-service solutions. They want a chatbot or a conversational assistant to be able to solve their challenges.

Second, our agents. They are the face of the company's customer service department. In the last couple of years, you will all agree, contact centers have been overburdened with calls, with days like today, a Cyber Monday, when people are sitting at home ordering things and want to call those contact centers to solve all their problems. And 30% of the time, instead of being able to focus on calls, these agents are spending time on admin tasks. That's what we've heard from our customers. So we need to solve that problem.

And lastly, managers and supervisors. You're collecting data day in and day out, but are you able to analyze those calls? Our customers tell us not all of them. So we want to empower customers to be able to analyze 100% of their calls, which is why we have three solutions mapped to those three challenges.

The first one, as I was saying, is self-service virtual agents: your conversational IVR and chatbots, boosted using generative AI and attached to the same knowledge bases that are used by company agents, so that customers can find answers when they want, at a time that's convenient to them.

The second solution, for the challenges related to agents, happens while a conversation is still going on: the real-time call analytics and agent assist solution. This is the holy grail of understanding what the customer wants when they are really troubled. You are able to pick up insights like ongoing call sentiment, and the agents are empowered to find answers faster because prompts come to them with suggested responses for the ongoing conversation. They are more focused, answers are given faster, call resolution time goes down, and of course, the customers are happy.

Last but not least, and something we'll focus a lot on in today's presentation, is conversation analytics. Like I was saying, you have millions of calls in a year, but are you able to analyze them? We have a post-call analytics solution that lets you do just that, so that you can derive insights like overall call sentiment. How are your agents performing? What are the upcoming business trends? What are the top things your customers are complaining about, or maybe the top things they are happy about, so that you can double down on those things and boost your business performance?

So let's look at what some of our customers have already seen in terms of benefits, starting with W Bank, which used the self-service conversational AI platform provided by AWS. They saw a 90% reduction in the time customers spent on a simple call like a balance inquiry. It went down from 4.5 minutes to 28 seconds. That's huge. And 30% of their calls are now contained using these self-service solutions.

Moving on to Magellan Health, which is using the real-time call analytics and agent assist solution. They brought down agent training time by 3 to 5 days. And though it sounds like a small number, they started saving 9 to 15 seconds per call, and over 2.2 million calls per year, that added up to about 4,400 hours. You can do the math by multiplying by agent salary and other operational costs.

Then for our post-call analytics solution, we have two customers. State Auto Insurance saved about $800,000 in operational expenses by being able to analyze all of their calls. And TSB Bank was able to analyze 5 million calls in a year. They were analyzing only about 10 to 12%, and they moved to 100% call analysis, which helped them identify over 800 call intents, the reasons why their customers were calling. That helped them improve customer experience because they were able to transfer calls to the right agent mapped to the intent of the call.

And this is my favorite slide, because it shows the sweat and blood the team has put into making our customers happy and earning their trust in our contact center solutions. AWS contact center solutions are horizontal; no matter what industry you belong to, I'm sure we can help you solve your challenges by introducing AI and generative AI into your contact centers.

And with that, I'll hand over to my colleague, Chris.

Thanks, Atika. So, to get started quickly with call analytics and generative AI on AWS, we have two flexible options. The first one is Amazon Connect, our contact center solution that allows customers of any size to get started with a contact center and provide a superior customer experience. For those that are unable to move to Amazon Connect, for example if you have a custom solution you've already built or you're locked into a contact center vendor, we have what's called the AWS CCI solutions. These are example APIs and code that allow you to get started on AWS no matter the contact center platform.

Now, regardless of which way you go, they're both powered by AWS language AI services such as Amazon Transcribe, to go from speech to text, and we use generative AI such as Amazon Bedrock. The AWS CCI solutions cover the three use cases that Atika mentioned earlier: self-service chatbots, real-time agent assist, and conversational analytics. The CCI solutions support many different contact centers, such as 8x8, Cisco, Avaya, and others. And they do this by using industry-standard file formats and protocols such as WAV files, MP3s, and SIPREC.

Today, we're going to be focusing on a solution called Post Call Analytics, and Miguel is going to dive into the details of how Principal Financial Group is using it. All these solutions are open source, which means you have access to all the code and can get started quickly building your own solutions. At the end of the session, we're going to show all the resources that are available to you, but one that's pretty easy to remember, that I put up there, is amazon.com/postcallanalytics.

When building Post Call Analytics, we worked backwards from the challenges we heard from customers with contact centers, for example a lack of insight into why customers are calling, or difficulty evaluating how their agents are performing. So we used AWS language AI services: Amazon Transcribe to generate call transcripts, Amazon Comprehend to do call analytics and generate conversational insights, and Amazon Bedrock for call summarization and other generative AI tasks. The result is being able to discover key business trends, identify the root causes of why your customers are calling, and improve agent productivity.

So now let's dive into the architecture and see how Post Call Analytics works. First, like I mentioned, it starts with standard audio formats that are uploaded to an S3 bucket in AWS. From here, a Lambda function is triggered that starts a Step Functions workflow, which chains all those language AI services together and uses them to generate those insights. Additionally, we use Amazon Bedrock to go deeper, doing things like identifying topics and identifying action items that your agents have to perform at the end of the call. We take all of this data, the transcription and the insights, and store it in DynamoDB and Amazon S3.
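The S3-to-Lambda-to-Step-Functions hand-off can be sketched as below. This is a minimal illustration of the trigger pattern, not the actual PCA handler (which lives in the open-source repo and does more, such as filtering by file type); the state machine name in the comment is hypothetical.

```python
def parse_s3_event(event):
    """Extract the bucket name and audio-file key from a standard
    S3 -> Lambda event notification."""
    record = event["Records"][0]["s3"]
    return record["bucket"]["name"], record["object"]["key"]

def lambda_handler(event, context):
    bucket, key = parse_s3_event(event)
    # In the real solution, this is where the Step Functions execution
    # that orchestrates Transcribe, Comprehend, and Bedrock would start,
    # e.g. (hypothetical ARN):
    #   boto3.client("stepfunctions").start_execution(
    #       stateMachineArn=PCA_STATE_MACHINE_ARN,
    #       input=json.dumps({"bucket": bucket, "key": key}))
    return {"bucket": bucket, "key": key}
```

The handler stays thin on purpose: the Step Functions workflow, not the Lambda, owns the multi-service orchestration.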

Now, it's important to note that what we're doing here is building a data lake of all of those insights, and we provide two different ways to access them. The first one is that Post Call Analytics contains a React-based user interface, hosted in S3 and CloudFront, that lets you get started building your own user interfaces on top of PCA. And again, all of that source code is available to you, open source, on GitHub.

Additionally, like i mentioned, because we have this data lake in S3, we can write SQL queries using Amazon Athena and build aggregated insights. And with that, we can also build dashboards with Amazon QuickSight.
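An aggregation over that data lake might look like the sketch below. The table and column names are illustrative assumptions (the real PCA Glue catalog defines its own schema); the commented Athena call shows where the query would be submitted.

```python
def build_sentiment_query(table, start_date, end_date):
    """Build an Athena SQL string aggregating call volume and average
    customer sentiment per queue over a date range.
    Table/column names are hypothetical placeholders."""
    return (
        f"SELECT queue, "
        f"AVG(customer_sentiment) AS avg_customer_sentiment, "
        f"COUNT(*) AS calls "
        f"FROM {table} "
        f"WHERE call_date BETWEEN DATE '{start_date}' AND DATE '{end_date}' "
        f"GROUP BY queue ORDER BY calls DESC"
    )

# The string would then be submitted to Athena, e.g.:
#   boto3.client("athena").start_query_execution(
#       QueryString=build_sentiment_query(...),
#       ResultConfiguration={"OutputLocation": "s3://query-results-bucket/"})
```

The resulting table is exactly the kind of aggregate a QuickSight dashboard would sit on top of.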

Now, finally, I should call out that Post Call Analytics actually has a sister solution called Live Call Analytics, powered by the Amazon Chime SDK Voice Connector. With this, we can analyze calls in real time as they're happening. And the benefit of that, of course, like Atika mentioned, is that we can provide agent assist, for example suggested responses or completing tasks in real time.

So now I am proud to announce a few new features of Amazon Transcribe. We've been seeing that in order to effectively leverage generative AI, customers are increasingly looking for greater accuracy and language support. So today, I'm excited to announce the launch of a new multi-billion-parameter speech foundation model that powers Amazon Transcribe and supports over 100 locales. This model is trained using a best-in-class self-supervision approach, and it learns the inherent patterns of universal speech and accents across millions of hours of unlabeled audio data. The speech foundation model provides a 30% relative accuracy improvement across all the locales, and it also enhances readability with more accurate punctuation and capitalization. The model provides expanded support for different accents, noisy environments, and other acoustic conditions. And it supports the many features we love about Amazon Transcribe, for example automatic language identification and speaker diarization.

The second announcement today, which is available in preview, is call summarization as part of the Transcribe Call Analytics API. So now, with one single API call, you can transcribe the call, generate insights such as issues, action items, outcomes, and sentiment, and get a call summary, all with one single API call. It optionally allows for redaction, not just of the transcript but also of the summary.
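A request for that single API call might be assembled as follows. This is a sketch of the StartCallAnalyticsJob request shape; the `Summarization` setting reflects the preview feature described above, but since the feature is in preview, treat the exact field names as assumptions and check the current API reference before relying on them.

```python
def build_call_analytics_request(job_name, media_uri, role_arn):
    """Assemble keyword arguments for Amazon Transcribe's
    StartCallAnalyticsJob API. The Summarization setting is based on the
    preview announcement and may differ from the final API shape."""
    return {
        "CallAnalyticsJobName": job_name,
        "Media": {"MediaFileUri": media_uri},
        "DataAccessRoleArn": role_arn,
        # Stereo recordings map one channel per participant.
        "ChannelDefinitions": [
            {"ChannelId": 0, "ParticipantRole": "AGENT"},
            {"ChannelId": 1, "ParticipantRole": "CUSTOMER"},
        ],
        "Settings": {"Summarization": {"GenerateAbstractiveSummary": True}},
    }

# The dict would be unpacked into the API call:
#   boto3.client("transcribe").start_call_analytics_job(**request)
```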

And with that, I want to turn it over to Miguel Sanchez, regional chief data officer of Principal Financial Group, and he's going to talk about how Principal takes advantage of Post Call Analytics and generative AI on AWS.

Thank you, Chris. I'm so happy and glad to be here sharing our journey. I would like to take the first 30 seconds to honor someone: my cousin Daniel Orozco Sanchez, who used to work for AWS and who passed away about a year ago. We both had a dream of presenting here at a re:Invent event. So here we are. This is for Daniel.

I'm going to walk you through some context on who we are, why we are working with AWS, and specifically why we are working with the AWS Post Call Analytics framework. I'll share the approach we are taking for deployment, the roadmap we'll be facing for 2024, and some demonstrations of the PCA console with the latest features Chris was referring to, like summarization. I'll also share another really important topic for us: the topic hierarchy definition that we assembled together with our business stakeholders.

So, let's go to who we are. The Principal Financial Group is an established financial services firm with more than 140 years in the market. We are a global investment management leader and serve more than 62 million customers around the world. Right now, we manage around $635 billion in assets under management. Related to engagement centers, it is worth mentioning that we process around 30,000 customer calls on a daily basis, supported by more than 1,500 engagement centers. The average call time is eight minutes, and the average speed to answer is 51 seconds, with callers waiting less than a minute to talk to an available agent.

We were facing a real challenge, and we had to look for alternatives to improve not only our engagement center operation but also the customer experience. Why AWS and PCA for Principal? There is a strategic definition behind the scenes. We set an aggressive goal to migrate all our applications and data to the cloud by the end of 2026, and AWS is our strategic partner for that journey. In addition to that, I am proudly leading a language AI team, and we had the chance to benchmark each one of the components embedded within the PCA framework.

We ran a benchmark comparing it with other solutions offered in the industry. We also validated that AWS follows our enterprise architecture definitions. We found very high accuracy in one component that Chris referred to previously: TCA, Transcribe Call Analytics. To be honest with you, this is a unique component that we didn't find in any other offering. Basically, it's the combination of transcription with data mining, and now it is getting infused with gen AI. This component was part of the rationale we used to decide to work with PCA. And last, but not least, is the access to subject matter experts and product owners.

We are so pleased to have the support of people like Chris and Atika working with us, even debugging code and deploying the platform. We have created a great partnership with AWS for this specific journey. So, where are we at right now?

We started with PCA about a year and a half ago. Once we selected the platform, we established a nice partnership with AWS, supported by two specific programs: the first one, the Architect Resident program, and the second one, something called ADELA. My language AI team partnered with AWS through these programs, and we were able to refine and personalize the PCA framework and deploy it. And I am very proud to say that today we have been able to process more than 1 million calls.

PCA has proven to be successful, and we are using it in multiple use cases while actively improving, scaling, and evaluating it with product managers, customer experience consultants, and servicing leaders.

There is another important point I would like to mention: it is an open source framework. We've got a lot of flexibility to incorporate additional components and additional channels. Right now, we are bringing customer email into the picture as part of the PCA framework, and of course, with all these announcements, we are now relying on Bedrock for multiple purposes. I'll provide more details about that.

Now I'm going to point to the requirement we received. Basically, this is the business requirement, the challenge I already mentioned: we deal with a lot of customer voice interactions, and we were looking to enable conversational analytics. This is related to the Voice of Customer program. We were told: you need to find an IT platform out there that can deal with unstructured and unsolicited data, following some specific business rules.

Those rules are:

  • Listen: provide the ability to capture data from multiple data sources
  • Interpret: synthesize data into actionable insights
  • Act: implement enhancements to improve outcomes
  • Monitor: quantify the performance of customer experience efforts
  • Govern: align, commit, and prioritize

So those are the principles that were defined by our business stakeholders in the Voice of Customer program.

With that definition and business requirement, we defined the approach for how we were going to move forward with PCA. Once we selected the platform and created this partnership with AWS, we defined three main phases.

The first phase was the technical deployment. There, we were dealing with specific MVPs and activities related to transcription. I'll provide more details, but we use Genesys Cloud as our engagement center platform. So we pull data from our Genesys Cloud platform and run it through PCA. The first step is to go through Amazon Transcribe and get high-quality transcripts.

We were able to provide sentiment analysis, topic and intent identification, and PII redaction and obfuscation. This is where the beauty of TCA plays a key role. We are in a highly regulated industry, and we cannot expose our data to everyone, so PII is a big, big deal for us. PII was also part of phase one, along with reporting.
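To make the effect of redaction concrete, here is a pure-Python illustration. To be clear, this is not how Transcribe does it; the actual PII redaction in TCA uses ML models, not regexes. The sketch only shows what a redacted utterance looks like, with an assumed `[PII]` placeholder tag.

```python
import re

# SSN-shaped tokens (###-##-####). Illustrative pattern only; the real
# TCA redaction covers many PII types (names, card numbers, addresses...).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_ssn(utterance):
    """Replace SSN-shaped tokens with a [PII] tag, mimicking the way a
    redacted transcript masks the original digits."""
    return SSN_PATTERN.sub("[PII]", utterance)
```

So "my social security number is 123-45-6789" comes back with the digits masked, which is what both the stored transcript and the audio playback reflect.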

It is also worth mentioning that we rely on Amazon QuickSight, and one particular feature called Q, which is its natural language query feature.

For phase two, we defined the topic hierarchy definition. This is tailor-made; it's something we created internally and refined with our business stakeholders. Relying on PCA data points and supported by Bedrock, we were able to create our own topic taxonomy definition. This is something we released, and it's part of the video I'll share with you.
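A Bedrock-backed taxonomy step of this kind typically reduces to a classification prompt. The sketch below is a generic illustration under stated assumptions: the taxonomy entries, prompt wording, and the model id in the comment are all made up; Principal's actual taxonomy and prompts are internal.

```python
def build_topic_prompt(taxonomy, call_summary):
    """Build a prompt asking an LLM to map a call summary to exactly one
    path in a three-level topic hierarchy. Taxonomy entries are
    (level1, level2, level3) tuples; all values here are illustrative."""
    paths = "\n".join(f"- {l1} > {l2} > {l3}" for (l1, l2, l3) in taxonomy)
    return (
        "Classify the call summary into exactly one topic path below.\n"
        f"Topic paths:\n{paths}\n\n"
        f"Call summary: {call_summary}\n"
        "Answer with the topic path only."
    )

# The prompt would be sent to a Bedrock model, e.g. (model id assumed):
#   boto3.client("bedrock-runtime").invoke_model(
#       modelId="anthropic.claude-instant-v1", body=json.dumps({...}))
```

Clustering the per-call answers then yields the aggregated topic counts that feed the QuickSight report.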

The second one was customer intent. Customer intent is going to play a key role not only for Voice of Customer but for the customer experience, because with customer intent we can determine the why: why is the customer calling us? That's why it's going to be a foundational piece for another initiative triggered by PCA, virtual assistants with Amazon Lex. We are going to be deploying Amazon Lex.

The last two are related to reporting enhancements and additions, because we realized that, given we have the voice interactions, we should bring additional channels into the equation. The last one is related to virtual assistants; as I mentioned, we are looking to deploy Amazon Lex, and the PCA data is being used for multiple conversational purposes.

There is another functionality in there that relies on the topic hierarchy definition: emerging theme detection. With that feature, we can detect if something is becoming important, or perhaps creating friction in the customer experience. And there is another component I would like to point out because it is related to gen AI: model retraining. Let me be more specific on model retraining, because it is not related to Bedrock; it's related to another AWS feature called SageMaker JumpStart. For some specific internal definitions, we are working with small pre-trained models, and that's where model retraining comes in.

So that's basically the approach we are following for the PCA deployment. As I mentioned, once we discovered the power we had with PCA, our analysts and leaders wanted to create a holistic view of customer interactions. The first decision was to bring in customer email interactions, and we are going to be bringing in more and more channels. Right now, we are working on gluing email and voice interactions together, relying on a graph database approach with another AWS component called Neptune. We are gluing all those interactions together and looking to bring in customer surveys, social media interactions, and digital interactions.

Our goal as a company is to provide a comprehensive perspective on multi-channel customer engagement. How was the PCA framework implemented? As I mentioned, we are integrated with Genesys Cloud, ingesting data on a daily basis for specific use cases. It is also worth mentioning that we did not follow a big bang approach. Initially, we dealt with specific business domains. This is also important: this is not an IT-only initiative, this is a business initiative. The first business domain we worked with was money out, and then we moved on to money in. So we were bringing data, and we are still bringing data, on a daily basis from Genesys Cloud.

The raw data is saved in an Amazon S3 bucket, and relying on Amazon Transcribe, we create metadata and some basic KPIs related to the calls. All that information is exposed in JSON format and consumed using QuickSight. From there, the workflow uses Comprehend, which relates to TCA. With Comprehend and Transcribe, we can identify topics, intents, issues, takeaways, and sentiment. And now we've got Bedrock, the next step within our workflow, with which we can get call summaries and also run gen AI queries, which is another cool feature you will see in the video.

There are a couple more components in here. Translate: PCA was deployed for our US market, but it was also deployed in Mexico, so at some point we foresee the need to share some of that information; the topic hierarchy definition we created here can perhaps be extrapolated to another member company. The last component is Kendra, and I would like to highlight Kendra because it is playing a key role in another initiative we are considering within our roadmap. Kendra is the search component that allows us to look for a specific keyword mentioned within a customer interaction. We can search for that keyword, and Kendra will offer a ranking of all the places where it was found. We can click on one and even listen to the conversation. And it is not just listening, because PII redaction is there as part of TCA, so the conversation is redacted. If a social security number is mentioned within the conversation, you will hear "my social security number is beep beep beep beep," because it's redacted and obfuscated.

The information is consumed, as I mentioned, through QuickSight and also through the PCA console, which is part of the demo. So this is the new PCA console. We created this video with real data, so you will be seeing real data; of course, it has been redacted for this presentation. Let me make sure it is running.

OK, let's run it. The PCA console provides call details, including call metadata, queue name, agent name, call duration, and agent and sentiment trends. It also provides transcript details and speaker time for all the stakeholders involved in the voice interaction. There is a new functionality that goes through tone, loudness, and sentiment, which is very useful for determining how effective the interaction was. And now you will see the new cool functionality: PCA now hosts a live gen AI query on the call details page, enabling users to ask questions in real time, such as "How could the agent have done better?" or "Did the agent show empathy?", plus additional summarization and identification tasks. This is something we released no more than a couple of months ago, and it's creating a lot of impact and good feedback from our business stakeholders.

The next video is related to the topic hierarchy definition. This functionality aims to help our business stakeholders detect trending topics, drill down, and get details on specifics, allowing proactive actions to improve the customer experience. This is extremely important; as I mentioned before, it is something we created in house, relying on PCA data points and supported by Bedrock, specifically Claude Instant. Yes, it's running. The report is built on Amazon QuickSight, providing natural language functionality supported by QuickSight Q. The report allows filtering by specific date ranges and engagement center queues, providing specific KPIs like the number of calls, average talk time, and call duration. We have defined three levels within the hierarchy, with a summary for each one of them. This has been a very detailed and refined initiative, partnering with our business stakeholders to include business-relevant topics and clustering the outcomes provided by Bedrock. Finally, we created a timeline analysis of the number of calls per day pointing to a specific topic.

OK. Now I'm going to move to one of my favorite slides. At Principal, as we think about understanding the customer experience, our goal is simple: ultimately, we want to deliver simplified, personalized, and anticipatory customer experiences that build a feeling of security when customers interact with us using their preferred channel, in this case either email or voice. This is extremely important, because the cornerstone of our customer experience is now PCA: the voice interaction and the richness we get from all that information. And we are now combining that with email interactions. For us, it's extremely important to deal with the what, what the customers are talking about, which is the topics, but also to rely on the why: why are the customers calling? Why are the customers emailing us? Topics and intents are the way we connect different channels. I already explained that we rely on Neptune for this specific purpose, and we are able to find hidden relationships in customer experiences across multiple channels. So the omni-channel experience is extremely important for us, and that is exactly where we are moving.

I am going to detail some of the activities within the roadmap we will be facing next year. We are looking to continue our partnership with AWS, executing on a very exciting roadmap. Phase one is already in production: we use Post Call Analytics, enabling PII redaction, topic hierarchy definition, and summarization. To date, we have processed over 1 million calls from multiple contact center queues, which has provided deep insight into the content of customer calls. With enhanced AI capabilities, we are relying on large language models to gather additional customer insights.

For phase two, I already mentioned email interaction. We are already processing email interactions, and we are looking at additional integrations with Google Analytics, because that's the platform we use for our tagging strategy, bringing that digital interaction into the equation. And of course, we are looking to improve our topic hierarchy definition considering new business domains.

Phase three is, I would say, extremely strategic for us right now. Considering the substantial progress we have had with PCA, and all the different data points we are able to process, we said we need to enable an intelligent agent, relying on PCA data but eventually complemented by additional knowledge bases. So now we are working with AWS to deploy intelligent agents supported by the AWS QnABot solution, infused by Amazon Bedrock and using Kendra as a pivotal platform. Why am I pointing to Kendra? Because we are following a RAG approach, retrieval-augmented generation, pointing to the PCA data complemented by additional knowledge bases. QnABot provides the functionality to create our own knowledge bases or to point to pre-existing ones. The user interface is going to be a chatbot-like interface, but it will be scoped to customer omni-channel interactions. It's a challenge, but it is going to be a really exciting roadmap for next year.
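The RAG flow described above can be sketched in two steps: retrieve passages, then assemble a grounded prompt. This is a generic RAG sketch, not the QnABot solution's internal prompt; the passage format and wording are assumptions, and the Kendra call in the comment shows where retrieval would plug in.

```python
def build_rag_prompt(question, passages):
    """Assemble a retrieval-augmented prompt from passages returned by a
    retriever such as Amazon Kendra, numbering each passage so the model
    can ground its answer."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the passages below.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

# Passages could come from Kendra's Retrieve API, e.g.:
#   resp = boto3.client("kendra").retrieve(IndexId=INDEX_ID, QueryText=question)
#   passages = [r["Content"] for r in resp["ResultItems"]]
# The assembled prompt would then go to a Bedrock model for generation.
```

Because the prompt carries only retrieved PCA-derived passages (plus any added knowledge bases), the chatbot stays scoped to customer omni-channel interactions rather than answering from the model's general knowledge.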

Thank you so much. All right, next steps: if you want to get in touch with us, you can ask us for a discovery workshop or start a proof of concept. You can work with the different contact center platform providers that Chris showed, with contact center intelligence solutions, or you can reach out to us to learn more about the Amazon Connect solution. You can work with our AWS experts, the ProServe team, and our long list of CCI partners, consulting partners, and ISVs. Before we let you go, we want to leave you with some resources that will help you understand more about all the solutions we just spoke about. And if you want to know more about AI/ML in contact centers, we have an interesting list of sessions lined up for the rest of the week that you can attend: workshops, chalk talks, and other breakout sessions. That was all from us. Do remember to fill in the survey and give us your feedback; that's always very helpful. And we'll open it up for questions. I'm happy to come to you, or you can stand up and shout at the top of your voice to ask your questions.
