Build generative AI apps that can perform tasks with Amazon Bedrock

Hello, everyone. Welcome to this session. Thank you for joining us today. My name is Harshal Pal. I'm a Senior Manager of Product with Amazon Bedrock. We'll be talking about Amazon Bedrock, and agents in particular. Thank you once again for joining us, making the trip to Mandalay Bay and navigating all the traffic. We really appreciate it.

So joining me today is Mark Roy, a partner in crime and Principal Machine Learning Architect at AWS. And our guest speaker is Sean, CTO of Athene Holding. Thank you, Sean, for making it to the session.

Before we jump in, I want to give you a quick overview of Bedrock. There have been quite a few announcements over the last couple of days. These are exciting times for generative AI in general and Bedrock in particular. With Bedrock, we want to make it really easy for you to build and scale generative AI applications. With that goal, we have provided capabilities at each of what I call the three layers. First, choice of models, with the launch of a few new models, but also evaluation of those models.

Second, we've provided new customization capabilities with fine tuning, continued pre-training and also retrieval augmented generation, with an API to simplify RAG. The focus of today's discussion is going to be integration, which is really the third layer, sitting above choice of models and customization. So we're gonna focus on agents.

Agents, as you might be familiar with, help you extend FMs, or foundation models, to actually do things. So we're gonna talk about agents, use cases and how Athene has used agents within their environment.

So roughly the agenda is:

  • A brief intro
  • Mark is going to talk about the orchestration and structure of agents
  • We want to have a demo
  • We'll have Sean talk about how Athene used agents in their environment

So that's really the next few minutes. We have a packed agenda. We do want to leave time for Q&A, and just in case we don't get through all the questions, Mark and I will be here after the session to help answer any questions you may have.

With that, let's jump in. Foundation models are great at interacting with users; all of us are very aware of their conversational capabilities. You can even customize them to look up information and respond to user queries with your company's data sources, right?

But where agents come in is to help you extend this capability and have foundation models actually do things for you. This is through invoking APIs, but also looking up information and combining the two to complete user-requested tasks.

The automation challenge that exists today, as you try to achieve this, is that you need to write quite a bit of code: create a prompt and engineer it so that it can do the orchestration, integrate with your company systems to invoke the APIs through Lambda functions, and then hook up the information sources so your agents can respond to user queries as part of the interaction.

So we took this set of problems and created Agents for Amazon Bedrock. The goal was really simple: take a natural language instruction from the developer, and behind the scenes we apply chain-of-thought prompting to create a prompt that is customized and optimized for orchestration. All of this is done securely and privately so that you have an enterprise-ready system.

There are a few things I'd like to get to over the next few slides, but our goal was to provide you with control, visibility and security as you get this orchestration done.

In addition to making agents generally available earlier this week, we've also provided something we call the prompt editor, which Mark is going to cover, and a chain-of-thought trace that helps you follow the different steps. That's the control and visibility; the security comes through secure calls to your Lambda functions.

So that's Agents for Bedrock. It helps you break down a task into multiple steps and then execute those steps in an automated manner.

We talked briefly about this, but just to summarize: it creates a multi-step orchestration for you. It simplifies building and deploying generative AI applications, which is really the founding tenet and the goal for Bedrock. And it provides you with fully managed infrastructure on top of Bedrock for this orchestration.

So we're gonna talk about the agent basics and I'd like to welcome Mark to cover the next section.

Mark: Thank you, Harshal. Microphone check. It sounds like we've got some volume here. Thanks, everybody, for coming. This is day four of the conference. Who's got energy here at the end of day four? Alright, there's one. Ok. Here we go. For the next few minutes, I'm gonna take you through a basic understanding of what Agents for Bedrock is all about. We'll talk about orchestration, and talk about the action groups that you make available for agents. I'll also spend some time explaining what you would use agents for. It's an interesting capability, but it's not trying to solve world hunger. What is it good at? We'll talk about that as well. Ok?

So let's jump into it. Agents let you put together a set of available actions and a set of available knowledge bases, but you also get to provide instructions. So you create an agent, you give it instructions, you make available some APIs, some knowledge bases and then it uses Bedrock under the hood to respond to requests.

So over the next few slides, I'll give you a few quick examples of how you can use instructions, actions and knowledge bases. A little toy example here: you could just use instructions on their own and create an agent in about two minutes. Here we're giving it an instruction to just make jokes, let's say. So it could say, "Why did the AI go to school? To improve its learning rate." Ok, that one fell flat.

Here we're just using an agent to work with an LLM, which isn't very useful on its own. The power really comes in when you say, let's make a meeting assistant. Here we have some instructions describing what the agent is good at, and now you've provided some actions: meeting actions like list available meetings and get action items from those meetings, and maybe a set of utilities, like send an email or get a list of people on my team.

So you could say to this agent, hey, get me action items from this meeting. It could go out and transcribe the meeting, summarize it for you using the LLM, and then you could say, hey, send it out to my team.

What about an HR policy assistant? Here we're saying that not only can you use action groups, but you can use knowledge bases. Now, in Adam's keynote, he talked about Bedrock knowledge bases being generally available. What a knowledge base is, is the ability to take advantage of vector stores and embedding models in a fully managed fashion.

So I just point to some documents that I have, let's say with policies in them, and I get a fully managed knowledge base. You can take those and plug them into agents. Now, if your agent needs to know about things like vacation policies, you can have an agent where I could say, hey, am I gonna get any vacation days if I switch to part time? The agent is able to find the available knowledge base, execute that search and then give back a nice natural language answer.

Now, you don't have to choose between action groups and knowledge bases; you can put them together. So here's another version of the HR policy assistant. Not only can it look up information like that, but you can get your vacation time approved. Here we've got a simple action group with one action available on it, and it wraps some existing APIs under the hood.

Now you can say, hey, how much vacation do I get each year? It can figure that out using a search against the knowledge base. And now, and this is important for me, I'd like to take off December 8th to 15th. We'll see if they let me. In this case, it goes out and actually invokes an existing API to get the vacation approved.

So the idea is you have the flexibility to define what an agent can do, put together some actions that wrap databases or APIs that are already existing and plug in knowledge bases and the agent is capable of figuring out a plan to execute whatever request you give it.

Now, Harshal mentioned this earlier, and I just want to reinforce that, as with all of our services, security is really important. Agents are very secure as well. You've got IAM roles; you can deny or grant access to individual actions, agents and knowledge bases. And fundamentally, Bedrock is how the LLMs are invoked under the hood, and that is all fully secure as well.

So you might be thinking, how hard is it to create agents? The majority of the work that you want an agent to do, you already know how to do. Companies have been building out microservices for decades. You've got existing operational data stores and data lakes; it's all there. So all we're doing is building on that foundation, letting you wrap it, and now having agents orchestrate across it.

So here we've got HR policy documentation. We've got some vacation microservice that was built eight years ago in Java. Let's say we've also got a leave-of-absence database that's around, and they're not integrated at all. You can build an agent where your knowledge base is fully managed, consuming those policy documents. Then, when you use your time-off agent, it will automatically search that knowledge base, because it knows that's where it will find answers to those types of questions.

When you go to ask, how much vacation do I have available, you put that in an action group. You describe it so that the agent knows when it's relevant to the conversation, and it's able to use the existing service. Same thing here: maybe there wasn't a service available, but there was a database, and now you're just building simple lookups against the database. So you're building on enterprise resources in order to orchestrate them more effectively and dynamically.

Ok. What are some of the things that you could build? These are just a few ideas. I'm hoping that you're going to come up with a lot of ideas and let us know what you're building. But consider a lot of tech support type functions are good candidates here.

So I don't know about you, but my laptop breaks down pretty regularly, and sometimes I'm waiting a little too long to get help, right? So why not give some help to the support person that's trying to help me, and have a laptop support assistant?

So you could take that existing wiki out there that talks about how to deal with laptops, and a troubleshooting guide about common problems, and put some knowledge bases on top of them. Then make a couple of action groups: maybe something that can pull up support history, because that's all tracked somewhere, and maybe a way to submit a laptop replacement. Piece of cake, right?

So you describe all of this to your agent, and now you can ask these kinds of questions: hey, I ordered a new laptop, but where is it? It could automatically look that up using the tracking system that's already in place.

What about ticket triage as another example? You've got support policies, you've got people available to help solve certain problems, so you've got a skills profile. You could have actions that would help route tickets, or maybe classify the tickets as well. Maybe they're used in combination, and this brings up a really important point.

Agents are not about hard coding a specific path. You can do that today with simple workflows. The power of agents is it can dynamically determine the best path based on the request and the resources you've made available to it.

You can write in any language, you can write all kinds of custom hardcoded applications. What an agent is good at is taking a request, looking at what are the things available for it to use and dynamically figuring out a plan and then executing that plan.

What about a product review helper? You've got some product or set of products that you manage, and people write reviews. Sometimes they're good, sometimes they're not so good. Maybe there are social media posts happening out there, and you want to respond to those more quickly. So you could have something that's event driven here: trigger an agent that automatically takes the review and, using an LLM, some history, and maybe some policies about how we are and aren't supposed to respond, comes up with drafts and hands those drafts to a human to decide, hey, I'm gonna go post a reply to that. Now, instead of a bad review sitting there for three days, you're able to turn it around and get it done a lot more quickly.

Tractor maintenance. How many people here do tractor maintenance? Just kidding. But you could picture any sort of maintenance-type situation. Car repair manuals, you know, that big manual in your glove box that's got 500 pages in it. You could easily pop that into a knowledge base and then ask simple questions. But it's not just question answering, it's taking action. So here we've got a repair calculator. You'd want to know pretty quickly, what's it gonna cost for me to do X, Y and Z? You could put that behind an action group and do simple calculations. Now you're using an LLM, but an LLM doesn't know about these things. So you're filling that void and providing something really actionable on top of LLMs.

Ok. Let me dive quickly into orchestration. I've alluded to it, and hopefully you get the basic concept by now. This is a level 300 talk, right, Harshal? Ok. So we're gonna go a couple of clicks deeper here.

First, at the high level again: when you give it a request, Agents for Bedrock takes the request, sees what's available in its set of actions and knowledge bases, and comes up with a plan. It's kind of like if you were building a SQL engine: somebody gives you a SQL query, you come up with a plan and then execute it, if anybody's done that before.

So that's what agents are doing. Once it has a plan, it will execute it, look at what happened, and it might redirect a little bit, maybe course-correct. So it thinks about the next step and keeps going until it's got a good answer. It evaluates each time and says, hey, this is what the user asked me, here's what I've got so far, looks like I'm done, and that's when it gives you back an answer. Or it may say, hey, that's a nice request, but I really don't have any available actions or knowledge bases to deal with that, so sorry, no answer.

So, double clicking to the next level: somebody sends a task into a Bedrock agent. Harshal alluded to all the heavy lifting that would happen if you wanted to do this on your own. Has anyone here tried to build chain-of-thought prompting? A few hands, so you know what I'm talking about. The prompt engineering involved is very complicated, very difficult to get right. Here, we're saying the agent automatically puts together a prompt that takes into account the conversation history, the descriptions of the actions, the descriptions of the knowledge bases and the instructions that you provided, and then it puts the task in there as well. Then it runs this chain-of-thought algorithm where it starts executing the steps. A step might be an API call using an action group, or it can be a knowledge base doing a search, and it's able to use a combination of those, in whatever order the agent figures out it needs, and then it gives you back the results.
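The loop described above can be sketched in a few lines. This is a toy illustration, not how Bedrock actually implements it: the tool names, the pre-computed plan and the stub functions here are all invented stand-ins, and a real agent would have the LLM choose each step dynamically and rewrite the raw results into a final answer.

```python
# Toy sketch of an agent orchestration loop: execute a step, record the
# observation, move on. All names below are illustrative stand-ins.

def lookup_vacation_policy(query):
    # Stand-in for a knowledge base search.
    return "Employees accrue 15 vacation days per year."

def request_time_off(start, end):
    # Stand-in for an action group API call to an existing service.
    return f"Time off from {start} to {end} submitted for approval."

TOOLS = {
    "search_kb": lookup_vacation_policy,
    "request_time_off": request_time_off,
}

def run_agent(task, plan):
    """Execute a plan step by step, keeping a reasoning trace."""
    trace = []
    for tool_name, args in plan:
        observation = TOOLS[tool_name](*args)
        trace.append((tool_name, observation))
    # A real agent would have the LLM turn the raw results into an answer.
    return trace[-1][1], trace

answer, trace = run_agent(
    "Approve my vacation for Dec 8 to 15",
    plan=[("search_kb", ("vacation policy",)),
          ("request_time_off", ("Dec 8", "Dec 15"))],
)
```

In the real service, the plan is not fixed up front: after each observation the agent re-evaluates whether it is done, which is what the trace in the console lets you inspect.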

Now the agent takes those results, and they're a little bit raw. It might be a list of things coming back from a database, or a big chunk of text coming back from a search. So it uses a Bedrock model to give you back something consumable. Ok?

One example, and I'll reinforce this with a demo: an insurance claims agent. The question here is, send a reminder to policyholders that have any missing documentation, and include the documentation requirements. So the agent thinks and comes up with a plan: hey, to answer this question, I'm gonna do these four things. Then it starts. Get the open claims: it grabs data from your request, it may have to figure out what that data is, it may even have to prompt the user, and it passes that along. It gets back an answer, looks at it, and decides what to do next. It can iterate as well; let's say you got a list back, it could iterate on that list. Here it says, get the policy IDs associated with these missing documents. Now it says, I'm gonna need to go look in my knowledge base so that I can construct a proper email that includes a little more information about what the customer is actually going to need. And then it says, send the reminder. It does all of those actions and gives back an answer, and you'll see this part in the demo as well.

Just a little preview here: Harshal mentioned a reasoning trace. How many here would be comfortable with a mission-critical request being handled by an agent without knowing what it was doing? There's a guy in the back, I think. Yeah, you don't want to do that, right? So what Agents for Bedrock does is give you a very well organized, easy-to-use trace. This is a picture of the test console. You can run a request, say show trace, and see exactly what's going on. You can get it back from the API as well; you'll see a demo of this.

And the last piece is prompt editing. There are some use cases where maybe you really have some advanced logic that doesn't fit the basic flow that agents already supports. Well, you're not stuck, because it's not a black box. You can actually see the underlying prompts and you can change them. This is an advanced use case, more of a 400-level session, but just so you know, that hook is there. And I'm gonna skip over this one. Ok?

A couple more minutes here and then I'm going to hand it over to Sean to show you what a real customer does with agents. So, action groups, just to cement the concepts: an action group has three parts. One is the description of your action group overall. The next is the API schema, and the API schema just says, I have eight methods, here's the list of them. For each method, you give the method a description, you say exactly what parameters it takes, and you say exactly what it returns. And this is important. Let's say you had hired a brand new developer and you said, here's my API, go do such and such. If your API documentation is nonexistent or poor, what kind of results are you going to get? So this API schema is critical; you want a rich description of the actions that are available. And then, corresponding to that, is the third part: the implementation. If I've got an action to send email, I describe what inputs it takes and what it gives back, and then I implement it as a simple Lambda function. Lambda supports nine different implementation languages. Python is very popular, JavaScript too, but you could do Java or C#, so there's a lot of flexibility for calling or implementing agents.

So in the Lambda function, typically it's gonna be a few lines of code, because you're calling something that already exists. That's the classic way to build out actions. As an example, for that utility action group I mentioned earlier, you'd write a description: hey, this action group gives a set of commonly used actions. Then in the schema, you'd say it's got action one and action two, and you write that in JSON or YAML, or you can generate it; I'll show you how to do that. And then you write the Lambda which implements the actions that you described. That's pretty simple as well, and I'll show you in a demo.
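As a sketch of what such a Lambda might look like, here is a handler for a hypothetical `/multiply` action. The event and response field names (`apiPath`, `parameters`, a `responseBody` keyed by content type) follow the Bedrock agents Lambda contract as I understand it; treat them as assumptions and check the current documentation before relying on them.

```python
import json

def lambda_handler(event, context):
    """Sketch of an action-group Lambda for a Bedrock agent.

    Assumes the event carries 'actionGroup', 'apiPath', 'httpMethod' and a
    'parameters' list of {name, type, value} dicts, per the agents contract.
    """
    api_path = event.get("apiPath")
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}

    if api_path == "/multiply":
        result = {"product": float(params["a"]) * float(params["b"])}
    else:
        result = {"error": f"unknown path {api_path}"}

    # The agent expects the result echoed back with the request metadata.
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "apiPath": api_path,
            "httpMethod": event.get("httpMethod"),
            "httpStatusCode": 200,
            "responseBody": {
                "application/json": {"body": json.dumps(result)}
            },
        },
    }
```

Notice the body really is just a few lines: the interesting work is in whatever existing service or database the handler calls.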

Ok, very quickly here. This sounds like it might be a little bit of extra work to write those definitions. It's actually not too hard. And what about writing the Lambda code? We wrote a little PartyRock application here; you can try this out on your own. You can basically describe your API and tell it some methods that you want. Here, I say multiply two numbers, and I say I want the schema in JSON. A few seconds later, it's spitting out a perfectly formed OpenAPI schema, which is a standard definition format. But there are a lot of examples out there; you can just copy and paste and edit. And here's the Lambda generated as well. Ok.
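For reference, a hand-written schema for that same multiply action might look roughly like this. This is a minimal sketch, not the actual generated output; the path, operation ID and parameter names are just illustrative.

```python
# Minimal OpenAPI 3 schema for a hypothetical "multiply two numbers" action.
# Rich descriptions matter: the agent uses them to decide when the action
# is relevant and how to fill in the parameters.
MULTIPLY_SCHEMA = {
    "openapi": "3.0.0",
    "info": {"title": "Utility actions", "version": "1.0.0"},
    "paths": {
        "/multiply": {
            "get": {
                "description": "Multiply two numbers and return the product.",
                "operationId": "multiply",
                "parameters": [
                    {"name": "a", "in": "query", "required": True,
                     "schema": {"type": "number"},
                     "description": "First factor."},
                    {"name": "b", "in": "query", "required": True,
                     "schema": {"type": "number"},
                     "description": "Second factor."},
                ],
                "responses": {
                    "200": {"description": "The product of a and b."}
                },
            }
        }
    },
}
```

The same structure works in YAML if you prefer; what matters is that every method, parameter and response carries a description the agent can reason over.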

With that, a couple more slides and we'll go to Athene. One of the things, when people first hear about agents, is that it always seems like something you would do in the console: you give it some request, it's more of a chatbot feel. When you build and prototype a new agent, that's true. But you get to deploy these, you get to deploy them in production. It's not just a simple piece of code that you're running separately; you build agents and then you use them from applications.

So how does that look for building and testing agents? Here's somebody building an agent. They use the console or they use the SDK. They create a draft of that agent and iterate on it: add actions, take them away, fix their Lambda function, test it thoroughly. Then, when they want to deploy it, they create a new alias; we've got versioning built in. And then anyone who wants to use that agent from a website, an application or a script can just build an application and invoke that agent via the alias. You send in a request to InvokeAgent, you get back a response, and you could even make it event driven. So we're giving you the tools to not only make prototypes but create real agents that you can deploy to production.

Here's some simple InvokeAgent code. On the left, you pass in the text, the agent ID and the alias ID, and you can have a session so that you can have a conversation.

You can enable the trace to get that reasoning trace, and then you get back a stream. If you just want the final answer, you get that back, or if you want to process the response trace, you can get that as well. I'll skip over this here; this is just a little bit more detail. Feel free to take a picture if you want, or go online to look at the agents documentation; it's all there. There's a full set of APIs for creating agents, just like there is for any AWS service.
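In Python with boto3, the call looks roughly like this. The agent and alias IDs are placeholders, actually running `ask_agent` requires AWS credentials and a deployed agent, and only the stream-assembly helper is pure logic; the `invoke_agent` parameters shown follow the bedrock-agent-runtime API as documented.

```python
def collect_completion(event_stream):
    """Assemble the final answer from an InvokeAgent response stream.

    Events carrying a 'chunk' hold pieces of the answer as bytes;
    'trace' events (present when enableTrace=True) hold reasoning steps.
    """
    answer, traces = "", []
    for event in event_stream:
        if "chunk" in event:
            answer += event["chunk"]["bytes"].decode("utf-8")
        elif "trace" in event:
            traces.append(event["trace"])
    return answer, traces

def ask_agent(agent_id, alias_id, session_id, text):
    # Needs AWS credentials and a deployed agent; IDs are placeholders.
    import boto3
    client = boto3.client("bedrock-agent-runtime")
    response = client.invoke_agent(
        agentId=agent_id,
        agentAliasId=alias_id,
        sessionId=session_id,  # reuse the same ID to continue a conversation
        inputText=text,
        enableTrace=True,
    )
    return collect_completion(response["completion"])
```

Reusing the same `sessionId` across calls is what gives you the multi-turn conversation mentioned on the slide.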

And with that, I'm gonna switch to Sean, and then we'll come back to a demo.

Sean: Thank you, Mark. It's great to be at re:Invent with you today. Thank you for taking time out of your schedule to come and explore agents with us.

So, as Mark said, my name is Sean Swanner. I'm the Chief Technology Officer at Athene. We are a leading provider of fixed and fixed indexed annuities, and we're based in West Des Moines, Iowa. In addition to our retail business, we have a reinsurance business, and one aspect of this reinsurance business is that when we take on a block from another carrier, there's a large set of data files that are sent to us for processing. We're an insurance company; there's actuarial work, processing work and data science work that must happen in order for this business to be properly handled by our systems. This process is time intensive, and it happens block by block.

Over the years, as we take on a set of data, we write code to load it, and over time we've accumulated a lot of different ways to do the same thing. So recently we started a project to modernize this code. The first thing we need to do is look at the code, look at the data and map the data. In this work, a given data element may come in from a customer file, go through a series of transformations in code and be mapped to a data value in our systems. So we take these inputs, and then, in order to figure out how this data works, a set of highly trained analysts will spend hours going through the code and the data, creating mapping documents, system documentation and design documentation that we then turn over to our developers to write the refactored code.

Our developers then take this data mapping and the documentation, and, using our current standard approach as part of this project, they re-plumb the data from what the client sends us to what we need it to be on our systems. This is a manual process. Every customer we have in this reinsurance business has their own file format. We're in the business of serving them, so they send us files in the format that works for them. Some customers send us one large file that has everything; others will send us a zip file with dozens of different files. Sometimes it's in CSV, sometimes it's in XML. These files have no common consistency to them. They're very large, so for our analysts even opening the files is a challenge: some of these CSVs have 100-plus columns, and some have so many rows that they're difficult to open on a laptop for review and processing. The code we have is like any legacy code: it's spread across multiple Python modules and multiple repos, and it's difficult to trace the code as it works its way through the data file to process the output. A classic ETL problem.

What this means for us, as we go through this project of modernizing block by block, is that it takes an extraordinary amount of time. So, like many of you, we spent the majority of 2023 really interested in how generative AI could help solve business problems. In particular, in talking with our AWS partners, we discussed the capabilities that are available to us now in Bedrock, and, working with AWS, we built a set of agents. Here's how we did it.

So here's the first step, and it's the same process Mark was describing. To accomplish this goal, we took the data files and the code and put them in an S3 bucket. Then we defined an agent, and we simply told the agent: your job is to generate the mapping document. I'll give you an example of a mapping document in a moment. Then we set up an action group that did a couple of things. First, we had the agent go through and read the data files and build a map of the data. Again, these data files contain hundreds of different elements. Some of the data elements are sent to us by the customer; others are determined by our processing code as the result of a calculation over one or more data fields.

Then we had to figure out how a given data element flows through the data files. Again, some are very literal, just there in the file where you can see them; for others, different Python modules may work together through a series of imports to run the calculations that determine the value. Once we have that figured out, we're able to create a data mapping document. And this is very simple: it's just a markdown document that lists out all of the data elements for a block and helps us understand how we get the data and how we process it.
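To make that concrete, a mapping document in that spirit could be generated like this. The column names and sample rows are invented for illustration; the actual document has many more columns and hundreds of rows.

```python
# Hypothetical sketch: emit a markdown mapping table, one row per data
# element, tracing source field -> transformation -> target field.

def build_mapping_doc(elements):
    lines = [
        "| Source field | Transformation | Target field |",
        "|---|---|---|",
    ]
    for e in elements:
        lines.append(f"| {e['source']} | {e['transform']} | {e['target']} |")
    return "\n".join(lines)

doc = build_mapping_doc([
    {"source": "POL_NUM", "transform": "pass-through",
     "target": "policy_id"},
    {"source": "PREM_AMT", "transform": "sum of monthly premiums",
     "target": "annual_premium"},
])
```

Because the output is plain markdown, it can feed both the human-authored documentation and, dropped back into the S3 bucket, the knowledge base for the second agent.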

We then take that document and put it back in that S3 bucket. And this is where it gets really interesting, because then we created a second agent, a knowledge-base agent, which takes the information: the data mapping, the original source code and the original data. It gives us a prompt where our staff can ask a series of questions about it.

Think about this: to get to the point where the agents have you now, an analyst on our team used to spend a lot of time manually going through everything, taking notes, creating spreadsheets, diagramming things and putting it all together. Now our analysts can log in, go to the agent and ask questions. They can say, for this value, where does it come from? How is this calculation determined? Where in the source code can I find the processing for this given data element? This helps us out in a number of ways.

One is, it saves us a lot of time. Second, and I agree with what Mark was saying, we would not have the agent write the final system specification. That's the job of the highly skilled humans on our team. But what this does do is let the agents provide draft language describing a data element or calculation. The agents help validate the analyst's understanding; they can ask questions and double-check their understanding of the data and the code. And this all comes together to save time.

In addition, when we give that documentation to our developers, we can also give them access to the agent, and they can ask questions. This saves time too, because instead of the coders asking the analysts, they can start by at least asking the agent their questions.

Ok, here's what it looks like. I know 12 rows may not seem like a whole lot, but this is a subset of the hundreds of rows of data that we generated with the agent. This mapping, these columns of data, would otherwise take a lot of time to build. And it provides a documentation reference that both feeds the documentation that the humans are creating and powers the knowledge base for the other agent.

And then on the right, you can see an example of the agent prompts and the ability to ask questions in a chat interface. This is using the knowledge base in a RAG style: we're using Claude 2 in this particular example, but it's also integrated with the knowledge base that has all that information from the bucket. We take that information, and the chat response goes and interrogates the knowledge base to give the answer.

So what did we get out of all this work? Well, the first thing is that the agent was very effective at handling the large data files. Again, for our analysts, even opening the files was a challenge in some cases. The agent was able to process these large files, both wide in terms of columns and long in terms of number of rows, and do analytics on them. The agent is also good at taking the Python source code files and reverse engineering the calculations out of them.

In addition, the data mapping document that I showed on the prior slide is a very valuable output that saved us a lot of time. And the agent, with its Q&A ability, is going to create a lasting and durable asset for us. One aspect of the nature of reinsurance is that this block of business could be on our books for many years, and it may outlast some of the developers and staff on the team. Having this documentation and this agent gives us a kind of active, live documentation source.

In our project plan, we had a phase where the analysts were to spend a two-week block of time doing this data mapping and analysis. From the point that we loaded the code and the data into that S3 bucket and ran the agent, we were able to produce the data mapping file in a matter of minutes. What this allows us to do is give our analysts more interesting and valuable work. It's not fun poring through someone else's old code and figuring out the business logic. What these analysts can do instead is higher value things. There's a lot of work to do once we get this ETL process done: there's a lot more work mapping it into our data model and figuring out whether the data is suitable for use in our actuarial systems, and these staff can then focus on those outcomes.

But we're not done; I think it gets better. This is a PoC that we put together fairly quickly, but with some impressive results. The next thing we're going to do is take these agents and expand them to more blocks of data. Some of these blocks will have different data formats, different structures, and different expectations on how we need to load and process the data. Some of our code that processes these blocks could be more challenging. And we're going to keep evolving and adapting the agents to do more and more work like this. Second, and this is a key takeaway for you: we will always be in a process of fine-tuning our prompts. This is a key thing we've learned as we've worked with all of our agents and our AI work: prompt engineering and prompt tuning is not a once-and-done activity. As you go through those screens that Mark will show in the demo coming up, you'll see there are opportunities to use natural language to explain to the agent what it should accomplish. And it's important to continually evaluate and tune that.

This week, we heard that Bedrock now supports Claude 2.1, and even moving from 2.0 to 2.1 should trigger a retuning of your prompts and a re-evaluation to make sure that everything is still working correctly.

Then the next thing I'm really excited about: the goal of this project is to actually refactor our code, not just do the analysis. And I'm really confident we can take an agent and say, suppose you took this data element and mapped it from the customer source file, through the transformation, into our standard data model, and now we need to do something with it, or even put it into the standard data model. With some training, some sample code, and some guidance, I strongly believe we could have an agent that says, here's the proposed code to do that work and that transformation, and actually have the agent participate in some of the code-writing work that we have to do in the future.

This is where it gets really exciting, because across all elements of our project, we take the grunt work, the repetitive, more tedious work, and have agents do that. That then allows our highly skilled staff to focus on more meaningful, value-additive work. And I'm really excited to get to that point.

And with that, I'm going to turn it back to Mark for the demo.

Thank you.

OK. So I'm going to take you through a couple of quick demos here. I wish we had another hour or two; I could take you through a world tour of agents. For the first one, remember that slide where I was showing you an insurance claims agent? Here we've got an agent where we're asking what looks like a simple request, but actually isn't that simple: send reminders for all open claims with missing documents.

Now, I've already executed that, and I hit what was called the Show trace button. Over here you see the reasoning trace. We've got multiple steps, and I'm going to expand step one and look at what happened.

So here it's saying: to answer this question, I need to call get all open claims, and then it's going to loop. For each of those open claims, it will call identify missing documents, because that information wasn't part of what came back from the claims list. And if there are any missing documents, it will call send reminders to generate an email to the customer: here's the claim ID, here's what you're missing, here's what you need to do, maybe a URL to take action on it. So that's the plan, and then it executes the first part of that plan, looking up claims. Here's one of the APIs that was defined in our API schema, and here's the output that came back.

So you've got a full trace here: what was the plan, what actions did it find, what knowledge bases did it find. And now that it's actually executed, it shows you what it passed in and what it got back. You've also got Lambda CloudWatch logs as well, so you could go there and see: here's the input payload I got, here's what I did, whatever logging you want to do in Lambda itself. And this keeps going.
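Each action in a group like this is typically backed by a Lambda function. Below is a minimal sketch of what a handler for a claims-listing path might look like. The claim data and the `/claims` path are made up for illustration, and the event/response envelope follows the general Bedrock Agents action-group contract (`messageVersion` "1.0" with an HTTP-style response body), so check the current documentation before relying on the exact shape.

```python
import json

# Hypothetical in-memory claim store; a real handler would query a claims system.
OPEN_CLAIMS = [
    {"claimId": "claim-006", "policyHolderId": "A945", "status": "Open"},
    {"claimId": "claim-201", "policyHolderId": "B201", "status": "Open"},
]

def lambda_handler(event, context):
    """Route a Bedrock Agents action-group invocation to the right API path."""
    api_path = event.get("apiPath", "")

    if api_path == "/claims":
        body = {"claims": OPEN_CLAIMS}
        status = 200
    else:
        body = {"error": f"unknown path {api_path}"}
        status = 404

    # Response envelope the agent runtime expects (messageVersion 1.0).
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "apiPath": api_path,
            "httpMethod": event.get("httpMethod", "GET"),
            "httpStatusCode": status,
            "responseBody": {"application/json": {"body": json.dumps(body)}},
        },
    }
```

The agent runtime passes the matched path and method in the event, so one Lambda can serve every operation in an action group with a simple dispatch like this.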

So step two, step three. Let's see what it's doing in step three: it was calling send reminders, it passed in a claim ID, claim 006 in this case, and it knew that the claim was missing accident images. You could hook in a knowledge base here and do a search: what does it mean when you're missing accident images? Well, tell the customer to get a picture of the windshield, the side of the car, the back of the car, and so forth. So it could look that up and pop it into the email as well. So there is a simple example of using an agent.

To show you a little more detail, here in the console I've got the actual agent, and I'm looking at the action group details. You remember I said there were three parts; there was the action group description, which I think was on the previous page. Here we're editing the details and we can see the API schema. It follows the OpenAPI standard, which you can look up: a standard way to describe interfaces. This description is important because it helps the agent know what these actions are good for. And more importantly, here's my first API: it gets a list of claims, and it tells you what comes back, an array of objects, and here's the detail: it includes a claim, a policyholder, and so forth. So you describe richly what goes in and what goes out, and that gives the agent the intelligence to know how to orchestrate these: what plan should I take, what sequence of steps should I be taking?
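To make that concrete, here is a minimal sketch of what such an OpenAPI 3.0 description might look like, written as a Python dict so it can be serialized to JSON. The path, operation ID, and field names are illustrative, not the actual schema from the demo; the point is that the rich `summary` and `description` text is what the agent reads to decide when to call the action.

```python
import json

# Illustrative OpenAPI 3.0 schema for a single claims-listing operation.
claims_api_schema = {
    "openapi": "3.0.0",
    "info": {"title": "Insurance Claims API", "version": "1.0.0"},
    "paths": {
        "/claims": {
            "get": {
                "summary": "Gets the list of all open insurance claims",
                "description": (
                    "Returns all open claims so the agent can follow up "
                    "on each one, for example to check for missing documents."
                ),
                "operationId": "getAllOpenClaims",
                "responses": {
                    "200": {
                        "description": "An array of open claim objects",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "array",
                                    "items": {
                                        "type": "object",
                                        "properties": {
                                            "claimId": {"type": "string"},
                                            "policyHolderId": {"type": "string"},
                                            "status": {"type": "string"},
                                        },
                                    },
                                }
                            }
                        },
                    }
                },
            }
        }
    },
}

# The JSON form is what you'd upload as the action group's API schema.
schema_json = json.dumps(claims_api_schema, indent=2)
```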

Let me show you another example. I was talking to a customer the other day, a drug company, and they were saying: yeah, that looks pretty cool, but can it do this? We have perception surveys that we run: how well are my drugs doing in the market? What do customers think? Are they working well? Do they have a lot of side effects? Are they cost-effective? Are they easy to administer? We survey customers quarterly. Can an agent figure out how to make meaningful sense out of that to identify trends?

So I quickly put together an example, and here I'm asking: how is this product trending throughout 2023? The only action available was a single one, which looks up the quarterly survey. So the agent has to interpret what 2023 means: it needs to make four different calls, Q1, Q2, Q3, Q4. If you look at the trace, it says: in order to answer this, I'm going to call get perception by quarter for each of the quarters, and then I'm going to do some analysis.

If you look at the answer that came back, it said: it looks like the scores have been steadily increasing throughout the year. Usability scores and others have gone up each quarter, while affordability has remained stable at four throughout the year. So it's not just a simple set of steps: it can iterate, it can figure out how to make multiple calls to the same action, and so forth.
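The iteration the agent worked out can be sketched in plain Python. Everything here is hypothetical, including the survey scores and the `get_perception_by_quarter` function name; it just mirrors the plan from the trace: one call per quarter to the single available action, then a trend comparison across the series.

```python
# Hypothetical quarterly survey scores (1-5 scale), keyed by quarter.
_SURVEY_DATA = {
    "Q1": {"usability": 3.1, "affordability": 4.0},
    "Q2": {"usability": 3.5, "affordability": 4.0},
    "Q3": {"usability": 3.9, "affordability": 4.0},
    "Q4": {"usability": 4.4, "affordability": 4.0},
}

def get_perception_by_quarter(quarter):
    """Stand-in for the single survey-lookup action available to the agent."""
    return _SURVEY_DATA[quarter]

def trend_for_year():
    """Call the action once per quarter, then summarize each metric's trend."""
    scores = [get_perception_by_quarter(q) for q in ("Q1", "Q2", "Q3", "Q4")]
    summary = {}
    for metric in scores[0]:
        series = [s[metric] for s in scores]
        if series[-1] > series[0]:
            summary[metric] = "increasing"
        elif series[-1] < series[0]:
            summary[metric] = "decreasing"
        else:
            summary[metric] = "stable"
    return summary
```

With an agent, the loop and the summary step are not hand-written like this; the model plans them itself from the question and the action description.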

Let's see, checking the time remaining, just making sure. OK, so I'll go a couple more minutes here. Here's an example I like. Consider a customer relationship management system; I'm sure you've got one. Maybe you're an account manager or a salesperson on a sales team, and you're about to meet with a customer next week, but you happen to work with a dozen different customers and you can't keep track of everything that's happening, right?

So here we say: OK, agent, suggest a meeting for me with this customer. Give me the meeting topic, give me an agenda, and tell me when I should have it based on the customer's preferences. What the agent ends up doing is looking up the last five interactions, the last five activities we had, each of which has a date and a set of notes. It looks at those notes and says: I can see what's going on here. The customer got their budget cut, they weren't happy about the reliability of our service, they want to look at this new service. And so it comes up with an idea and an agenda automatically. Now, to show you an example of how that might work if it was deployed into an application...

I used Bedrock to generate a Streamlit app, because I don't know how to write user interface code. And here I'm asking the agent. Notice that I've deployed the agent; I'm not in the console anymore. I deployed the agent, I made a new version, I got an alias, and I'm using the InvokeAgent API from an application. This will be the classic way that you use your agents: from an application. And because it's a live demo, it's doing a lot of thinking here... and there it is. OK?
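As a sketch, calling a deployed agent from application code with the boto3 `bedrock-agent-runtime` client might look like the following. The agent ID, alias ID, and session ID are placeholders; the helper takes the client as a parameter so it can be exercised without AWS credentials. The InvokeAgent API returns the completion as an event stream of byte chunks that the application joins back together.

```python
def ask_agent(client, agent_id, agent_alias_id, session_id, prompt):
    """Invoke a deployed Bedrock agent and join the streamed response chunks.

    `client` is expected to be boto3.client("bedrock-agent-runtime");
    any object with the same invoke_agent shape also works for local tests.
    """
    response = client.invoke_agent(
        agentId=agent_id,
        agentAliasId=agent_alias_id,
        sessionId=session_id,  # reuse across calls to keep conversation state
        inputText=prompt,
    )
    # The completion comes back as an event stream of byte chunks.
    parts = []
    for event in response["completion"]:
        chunk = event.get("chunk")
        if chunk:
            parts.append(chunk["bytes"].decode("utf-8"))
    return "".join(parts)

# Example wiring (requires AWS credentials and a deployed agent alias):
# import boto3
# client = boto3.client("bedrock-agent-runtime")
# print(ask_agent(client, "AGENT_ID", "ALIAS_ID", "session-1",
#                 "Suggest a meeting with this customer"))
```

Passing `enableTrace=True` to the same call additionally streams the reasoning trace events shown in the console, which is how an app can render the trace alongside the answer.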

So we're getting back the trace. I'm showing the trace on one side and the actual response on the other. It thought enough to call list recent interactions, and it looked up preferences: when does this customer like to meet? Let's look at what it came back with. Here is a set of meetings: we met on this date, and here are some notes that were taken. We pulled this right from the CRM system, and the agent took this set of activities and then used the power of the LLM to say: here's the meeting that we're going to have. Give me a demo, go through the pricing details. Here's the agenda, and they like to meet on Wednesday afternoon. So there you go.

OK, so the last piece I'll show you quickly: we talked a lot about knowledge bases as well. I put together a quick knowledge base. I downloaded about 50 IRS publications; they each had anywhere from five or ten up to 50 pages of PDF information about tax policies. I hate doing taxes. So I run this question, and you see how fast it came back. So not all demos are painful.

Here I asked: am I allowed to deduct vehicle maintenance like oil changes? And it came back. This is using an API and knowledge bases to just get you back the related chunks. Then I can also turn on generated responses, and I can pick a model here; I've got Claude Instant being used. I'm going to run the same question, and in addition to doing the lookup to get the five relevant chunks from the knowledge base, it crafts an answer for me. It gives me an answer, and it gives me a citation back to the original document. This is knowledge bases as part of Bedrock, and you can plug these into your agents along with your action groups as well.
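The chunk-lookup half of that flow can be sketched with the boto3 `bedrock-agent-runtime` Retrieve API. The knowledge base ID is a placeholder you would get from the Bedrock console, and the helper takes the client as a parameter so the call shape can be checked without AWS access; treat the exact response fields as an assumption to verify against current docs.

```python
def retrieve_chunks(client, knowledge_base_id, query, max_results=5):
    """Fetch the most relevant chunks for a query from a Bedrock knowledge base.

    `client` is expected to be boto3.client("bedrock-agent-runtime").
    Returns (chunk_text, source_location) pairs; the location is the
    citation back to the original document, e.g. an S3 URI.
    """
    response = client.retrieve(
        knowledgeBaseId=knowledge_base_id,
        retrievalQuery={"text": query},
        retrievalConfiguration={
            "vectorSearchConfiguration": {"numberOfResults": max_results}
        },
    )
    return [
        (result["content"]["text"], result.get("location", {}))
        for result in response["retrievalResults"]
    ]

# Example wiring (requires AWS credentials and an existing knowledge base):
# import boto3
# client = boto3.client("bedrock-agent-runtime")
# for text, loc in retrieve_chunks(client, "KB_ID",
#                                  "Can I deduct vehicle maintenance?"):
#     print(loc, text[:80])
```

The generated-answer mode in the demo corresponds to the companion RetrieveAndGenerate API, which runs the same lookup and then has the chosen model draft a cited answer from the chunks.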

So, the last piece, and then we'll have a few minutes of Q&A. I wanted to show you one more thing, and that is that you can use agents to create agents. I know Sean showed some really cool things, and he's got some great ideas about where to take it next. Well, we were sick of writing our own agents; we wanted to have agents write agents. So here I said: make me an agent that will take a greeting as a parameter, and if the greeting is hello, it returns hello world. It took about a minute to create that for me, and out came the agent. I didn't write a single line of code. I came back to the console, found the agent that was made for me, entered hello, and, I don't know if it'll work again here, but you can see it worked twice before: hello world. Yes.

And this is the Lambda that it created for me. It called the APIs to create the agent, associate it with the Lambda function, deploy the Lambda function, and so forth. So with that... all right, I like it. With that, I think we'll switch back, and maybe Harshal and Sean, you want to come up on stage as well.
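For reference, the kind of agent-creation call that generated Lambda would have made can be sketched with the boto3 `bedrock-agent` control-plane client. The role ARN, model ID, and instruction text below are placeholders, and the client is passed in as a parameter so the flow can be exercised with a stub; the follow-up steps of attaching an action group and an alias are noted but not shown.

```python
def create_hello_agent(client, role_arn):
    """Sketch of programmatically creating a Bedrock agent.

    `client` is expected to be boto3.client("bedrock-agent"); the model ID
    and instruction are illustrative values for the hello-world agent.
    """
    created = client.create_agent(
        agentName="hello-world-agent",
        agentResourceRoleArn=role_arn,
        foundationModel="anthropic.claude-v2",
        instruction=(
            "You take a greeting as a parameter. If the greeting is "
            "'hello', respond with 'hello world'."
        ),
    )
    agent_id = created["agent"]["agentId"]
    # Next steps (not shown): create an action group pointing at a Lambda,
    # then prepare the agent so a working draft picks up the changes.
    client.prepare_agent(agentId=agent_id)
    return agent_id
```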

OK, we're out of time, but thank you very much. We can take questions outside. Thank you very much. Thank you.
