Increasing mission impact with generative AI

Mike: Hi, everyone. My name is Mike. I am a Principal Solutions Architect with AWS and we are going to be talking about generative AI today.

So how many of you, by show of hands, are with a nonprofit organization or a mission-based organization? OK, so some of you. That's great. While what I'm going to be talking about applies to nonprofits and mission-based organizations, it really can apply to any organization, so just keep that in mind.

Now, when we talk about generative AI, there are a number of categories of common use cases that we tend to see. There's the obvious text generation, but honestly, among the organizations I work with, I don't see a ton of net-new text generation happening with generative AI. Where I really see organizations spending their time is with the top three, Q&A, text summarization, and text extraction, and, more and more, code generation.

So I'm going to talk mostly about those things. In terms of Q&A: when your users or your members interact with you through your website, you want them to interact in a way that's normal and natural, and a Q&A bot is a very natural way for your members and users to interact with you. I'll give you an example of that later today. In terms of text summarization, we see organizations wanting to take large amounts of text and then summarize it and figure out what all that text means.

A great example: if I'm a financial organization, I can ingest a number of PDFs from, say, the IRS. Maybe I want to understand the implications of filing or not filing a particular form. There's a lot of content you've got to ingest and process before you can make that decision. So again, that's a common use case that I see.

And then text extraction. This is an area that I think is underutilized, but I'll give you an example of how I see organizations using it today. And then, of course, code generation. Back at the advent of spell check, there was a lot of controversy: is this cheating? Can we use spell check, especially when writing an academic paper or a paper for college?

Now we're over that. We just understand that spell check is part of the set of tools you use to write the best thing that you can, and I think we're going to see the same thing with code generation. As Swami was saying this morning, it's not designed to replace the human; it's designed to augment the human and help you do the best job that you possibly can.

So let's get into these in a little more detail. The first demo I want to give you today is around summarization. The scenario here: I'm a busy person. There are a lot of meetings I have to attend, and every so often I can't attend one. Fortunately, they do record those meetings, so I can get the meeting recording and listen to it. But the meeting is usually an hour long, and finding the time to sit through that recording is hard, right?

But I really need to get something out of that recording so that I know what my next steps are for the next meeting. So I'm going to show you an example of how we can use generative AI to summarize meetings for us.

In this example, I've got my meeting recordings, and I can dump those recordings into an S3 bucket. Once I drop a meeting recording into the S3 bucket, it kicks off a Step Functions workflow, and that workflow goes through a number of tasks.

It kicks off Amazon Transcribe, which pulls the audio from that recording and transcribes it into text I can then use. But if you look at the raw output from Amazon Transcribe, it's not super usable by a large language model.

So I send that transcription off to a Lambda function to turn it into something a little more usable by a large language model. I then send that transcription to Amazon Bedrock and say: OK, given this transcription, please tell me what happened in the meeting, give me any action items, and anything else I should know from the meeting. The results from the generative AI model are then sent to an SNS topic that I'm subscribed to.

So I receive an email that shows me all the results of that meeting, and I've now saved myself an hour of listening to it.
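The Lambda-to-Bedrock step of that workflow can be sketched in a few lines. This is a minimal illustration, not the production workflow: the prompt wording, the model ID, and the payload shape (here, the Anthropic Claude text-completions format on Bedrock) are my own assumptions, and the live boto3 call is shown only in comments.

```python
import json

def build_summary_prompt(transcript: str) -> str:
    """Wrap the cleaned-up transcript in the summarization instruction."""
    return (
        "Human: Here is a meeting transcript:\n\n"
        f"{transcript}\n\n"
        "Please tell me what happened in the meeting, give me any action "
        "items, and anything else I should know.\n\nAssistant:"
    )

def build_bedrock_body(transcript: str) -> str:
    # Request body for a Claude v2 text-completions model on Bedrock.
    return json.dumps({
        "prompt": build_summary_prompt(transcript),
        "max_tokens_to_sample": 1024,
        "temperature": 0.2,
    })

# In the Lambda task, this body would be sent with boto3, e.g.:
# bedrock = boto3.client("bedrock-runtime")
# resp = bedrock.invoke_model(modelId="anthropic.claude-v2",
#                             body=build_bedrock_body(transcript_text))
```

The model's completion is then published to the SNS topic, which delivers the email summary.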

So let's see how this works in practice. I've got my S3 console here, and I'm going to show you the process of uploading a video to this S3 bucket. Once I upload the recording of this meeting, I'm going to move over to the Step Functions console, and we're going to see that I have a new execution of my state machine going. OK?

So here in my state machine, I can refresh, and I see that I have a new execution happening. I click into it, and you're going to see the steps I talked about: I send the job off to Amazon Transcribe to transcribe the audio. Once that's complete, you'll see that I send it off to a Lambda function, this parse-transcription function. It then summarizes that transcription through a large language model and sends the results to SNS to publish.

And at the end of the day, I get an email summary like this. I'll give you a second to look at it. I could read this and know exactly what happened in that previous meeting, I'd understand what was discussed and what the action items are, and I'd be good to go.

And this is a real meeting that I used in this example, from the National Telecommunications and Information Administration. So just so you're aware, this is real data we're using here.

OK, so that's the first example I wanted to show. The next example is using intelligent chatbots. Again, like I said, when your members or your users interact with you, you want them to interact in a way that's normal and natural, and a common way of interacting with you is through a chatbot.

So you want your users or members to be able to ask questions, and you want to be able to give them answers, but you want those answers to be based on your knowledge base.

So let me show you what we're going to be doing. In this case, I have a chatbot built with Amazon Lex, and I'm using a Lambda function, so that once a request or a conversation comes into my chatbot, I send it off to Lambda. My knowledge base is in Amazon Kendra, but I could store it in OpenSearch, or I could use some of the new Amazon Bedrock features that were announced this week.

In this case, though, my knowledge base is stored in Amazon Kendra. When a request comes in, I first query my knowledge base to find any content matching that query. I then send that content off to my large language model through Amazon Bedrock and say: OK, given this context, answer the user's question.
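Here's roughly what the fulfillment Lambda does, as a sketch. The retrieval and model calls need live services, so they're left in comments; the prompt template and the `KENDRA_INDEX_ID` name are illustrative choices of mine, not the exact code from the demo.

```python
def build_rag_prompt(question: str, passages: list) -> str:
    """Combine the passages retrieved from the knowledge base into the
    context the model is told to answer from."""
    context = "\n\n".join(passages)
    return (
        "Human: Use only the following context to answer the question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n\nAssistant:"
    )

# Inside the Lex fulfillment Lambda (live calls sketched in comments):
# kendra = boto3.client("kendra")
# result = kendra.retrieve(IndexId=KENDRA_INDEX_ID, QueryText=question)
# passages = [item["Content"] for item in result["ResultItems"]]
# answer = ...  # invoke the Bedrock model with build_rag_prompt(question,
#               # passages) and return the completion to Lex
```

Grounding the prompt in the retrieved passages is what keeps the answers tied to your own content rather than whatever the model learned in training.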

Now, the example I'm going to show you here is from a nonprofit I work with that runs a series of no-kill shelters. One of the problems animal shelters are seeing is that a lot of people adopt cats in the springtime and then find, over the course of the year, that they can no longer care for the cat, mostly because they don't know how to care for cats.

And so they tend to abandon the cats, either on the street or in shelters, later in the year. So this organization has created a set of kitten-care information. I want people to be able to come to my site, ask questions about how to care for their cat, and receive information, again in a normal and natural way, that's based on my knowledge base.

So here you can see I have my kitten-care chatbot, and I'm going to start by asking a question: what do cats like to eat? I have no idea. Behind the scenes, this request is going to Amazon Kendra, where I'm querying my knowledge base to find the relevant information, and then I'm sending that off to the large language model to answer my question.

And I can see that cats like to eat meat, like beef, chicken, turkey, and fish. I ask another question: what are fun ways to play with my cat? The same thing happens. The request goes off to Kendra to retrieve information from my knowledge base, and then I send that information on to the large language model and say: OK, this is my context; given this context, answer the question of what are fun ways to play with my cat.

And you can see that I have a response that, again, is normal and natural for my members.

The next example I want to talk about is text processing. This is an area that I think is drastically underutilized with large language models, but there's a lot of power to it. And prepare yourself: I'm going to show you the simplest architecture diagram you will see at re:Invent this week.

OK. All right. So I'm using one service. The reason I'm showing you this is that I want to send unformatted data into my large language model and get formatted data out. Let me give you a reason why I might want to do this.

I work with an organization that runs a large genealogical database. If you think about collecting genealogical data, there's a lot of unstructured data you have to parse. For example, you have to parse newspaper articles for birth-date and death-date information. Most newspapers don't carry that kind of information now, but 10 years ago, and certainly 100 years ago, birth and death information was always in the newspaper.

So I want to be able to parse newspaper articles. Again, this is unstructured text, and I want to have that text turned into a format I can then import into my database. OK?

So in this case, I'm just going to show you the Bedrock playground. In the real world, I would probably do this through an API with some sort of front end, but we don't need that for this specific example.

If you look at this example, and if you know your large language models, you can tell that I'm using Claude; specifically, I'm using Claude v2. I'm giving it some information: a newspaper article that says on this date, here's the birth information, and it came from this newspaper. And then I'm telling it: this is the format I want you to provide to me.

You can see that in this format, I identify each of the people mentioned in the article, I identify the relationships they have to each other, and I identify who the principal of the article is. The principal is the person the article is about: for a birth announcement, the principal is the person being born; for a death notice, it's the person who died.

And if you look through this, you can see that it's correctly identified all of the right information and all of the relationships with everybody else. This information in this format is something that I could then import into my database.

I then give it a new article and say: given this new article, I want you to generate the same format for me. So let's let this run and see what we get.

All right. You can look at this, and if you want to come up close to the screen and read everything that's there, you can. It has correctly identified every single person mentioned. In this case, it identified the correct principal, the person being born, and it identified that person's birth date. It identified the parents and that they're related to the person as parents, it identified a brother and that relationship, and it identified the two sets of grandparents and their grandparent relationship with this individual.

So again, you can see in this example that we can use a large language model to take unstructured text and turn it into a format we can actually use. In this example, I'm producing a flat-file format, but I could just as easily have produced XML or JSON or another format that makes sense for my business and my organization.
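As a sketch of what that could look like through an API rather than the playground: ask the model for JSON explicitly, then parse the reply before importing it. The prompt wording and the two-key output shape here are my own illustration, not the exact prompt from the demo.

```python
import json

def build_extraction_prompt(article: str) -> str:
    """Ask the model to emit structured data instead of prose."""
    return (
        "Human: From the article below, identify the principal (the person "
        "the article is about), every person mentioned, and each person's "
        "relationship to the principal. Respond with JSON using the keys "
        "'principal' and 'people'.\n\n"
        f"Article:\n{article}\n\nAssistant:"
    )

def parse_model_json(raw: str) -> dict:
    # Models sometimes wrap the JSON in prose, so take the outermost braces.
    start, end = raw.index("{"), raw.rindex("}") + 1
    return json.loads(raw[start:end])
```

Once the reply is parsed, the records can go straight into the database import pipeline instead of being hand-transcribed.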

The next example I want to talk about is using generative AI as a coding companion. Again, the idea here is not to replace the human but to supplement the human, to help you move further, faster. In this case, I'm using Amazon CodeWhisperer.

How many of you have used Amazon CodeWhisperer before? A couple of you? OK. So take a look at Amazon CodeWhisperer. I love it, and it has absolutely saved me tons of time writing code that, honestly, I probably should know how to write, but I'd have to look it up, and you know what? It's not worth it.

So using Amazon CodeWhisperer can drastically save you time. CodeWhisperer is trained on billions of lines of code from both open source and Amazon code, and there are a couple of things within it that I find particularly appealing.

The first is that it includes a scanning tool that can identify hard-to-find security vulnerabilities. The next thing I really like: a lot of organizations are concerned about the licensing of their code. There are different kinds of open source licensing.

There are permissive and non-permissive open source licenses, and you don't want to include non-permissively licensed code in your codebase if you're not expecting to also open source your workload. So CodeWhisperer will let you know if the code it's recommending looks like licensed code, whether that's an MIT license on the permissive side or GPLv2 on the non-permissive side.

And in fact, in the preferences for CodeWhisperer, you can even specify that you don't want to see code that looks like open source code at all, so you can avoid that altogether.

All right. So let me give you an example of what it actually looks like to write code with Amazon CodeWhisperer. I'm here in VS Code, writing Python; this is my IDE of choice and my language of choice. I'm going to start by putting in a comment saying I'm going to read in a CSV file that contains a user ID, a first name, a last name, and a phone number.

I hit enter, and you'll see that as I type, it recommends how to open that file. I hit tab to accept that, and notice that it recommends reading each line and then closing the file.

I'll write a new comment that says I now want to save the data to DynamoDB, and you'll notice it takes me longer to write the comment than it does to generate the code. As I'm typing, it's recommending this code here, and I simply hit tab to accept it.

Where there are several potential suggestions, I can use my arrow keys to see the different options and hit tab when I'm satisfied with the result. So you can see I've now easily added my items to DynamoDB and even printed out what I wanted.

Now, I'm going to write another comment where I'm going to save the original file to S3 and it's already recommended how to do that. I hit tab and I'm done.

So again, think about what I did here: I opened a file, I wrote that data out to DynamoDB, and I saved the file to S3. This is all pretty easy to do, but if I'm not familiar with all of these APIs, it would require me going out to the internet and looking up: what is the Boto3 code to do this or that? Instead, I wrote this code in literally 30 seconds.
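The finished script looks roughly like this. The CSV parsing is runnable as-is; the DynamoDB and S3 calls are shown in comments because they need live AWS resources, and the table and bucket names are made up for illustration rather than taken from the demo.

```python
import csv
import io

def load_users(csv_text: str) -> list:
    """Read rows of user_id, first_name, last_name, phone_number."""
    reader = csv.DictReader(
        io.StringIO(csv_text),
        fieldnames=["user_id", "first_name", "last_name", "phone_number"],
    )
    return list(reader)

# The AWS half mirrors what the assistant generated (illustrative names):
# import boto3
# table = boto3.resource("dynamodb").Table("users")
# for item in load_users(open("users.csv").read()):
#     table.put_item(Item=item)                    # write each row
# boto3.client("s3").upload_file("users.csv", "my-bucket", "users.csv")
```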

This is why I really recommend using tools like Amazon CodeWhisperer. OK?

So I've talked about how you can use generative AI in a lot of different settings. In terms of next steps, a couple of things I recommend: we have a number of sessions where you can learn more.

First off, we have two additional sessions today where we're talking about generative AI as part of the nonprofit track. At 1 PM today, in about 30 minutes, we have IMP202, Innovating for Scientific Discovery and Health Equity with AI/ML; check that out. And at 4 PM today, we have Modern Digital Experiences to Accelerate Mission Impact.

And then tomorrow at 4 PM, we have IMP204, Mental Health Crisis Intervention Based on Analytics and ML. I'd also recommend that you come visit us at the nonprofit booth. We're here in the Venetian, on the third floor in the west alcove; that's our Nonprofit Impact Lounge. We're there until five every day. Come by, say hi, pick up some swag. And especially if you're a nonprofit, let us know the problems you're working on so we can see if there are ways we can help you.

And with that, thanks for your time today, and please remember to fill out the session survey.
