Generative AI for decision-makers

So welcome. Got a lot to cover today. So let's get started.

I'm Mark Tron Carro. I'm a senior technical instructor with AWS, and I've been working in the AI/ML space for about eight years now. Very excited to be talking with you all today about Generative AI for Decision Makers. Decision makers come in a lot of different forms. Maybe you're an executive looking to understand Generative AI and how you might incorporate it into your business, or you're starting a project and just want some insights, or you're looking to upskill your workforce or yourself on Generative AI. And that's going to be a lot of the focus here today.

So we'll start with an introduction to Generative AI. We'll move on and cover some business use cases for Gen AI. And then we'll go over some technical foundations and terminology. The purpose of that isn't to give you a collegiate-level lecture or anything like that, but to cover some key concepts and terminology so that you're able to communicate effectively with the business and technical stakeholders working within your company.

We'll go into planning a Generative AI project and some considerations there, as well as how we evaluate not just the benefits but the risks and the potential mitigations we might employ. And then we'll close things out with building a Generative AI organization.

So let's start with an intro to Generative AI. And before we get into that, it's important to cover machine learning, have a basic understanding of that. And so at its core, machine learning is where we've got some data set and we're training a model on that data set to make predictions on some new unseen data.

Now, rather than a classical programming approach where a developer may explicitly define some rules, with machine learning we provide the data and machine learning is training a computer, a model, to recognize patterns in that data, to pick up the patterns and signals in that so it can make predictions on new unseen data.

So take predicting credit card fraud, for example. That's an example of supervised machine learning, where I've got a big data set, maybe years of credit card transactions, and I've got those labeled as "This is a valid transaction" and "This is a fraudulent one". And so as opposed to explicitly defining rules, I can take that labeled training data set, feed it into a traditional machine learning model, and develop some sort of classifier that's able to pick up the patterns and signals hidden in that data to determine whether a new transaction is fraudulent or not.
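To make that idea concrete, here's a deliberately tiny sketch of supervised learning in Python. Everything here is invented for illustration: a single "transaction amount" feature and a brute-force threshold search stand in for the real thing, where a fraud model would use many features and a proper ML library. The point is just that the rule is learned from labeled examples rather than hand-written.

```python
# Labeled training data: (transaction_amount, label) where 1 = fraudulent.
# These numbers are made up for the example.
training_data = [
    (12.50, 0), (48.00, 0), (35.25, 0), (60.00, 0),
    (980.00, 1), (1500.00, 1), (875.00, 1), (2100.00, 1),
]

def train_threshold(data):
    """'Train' by picking the amount cutoff with the fewest misclassifications."""
    best_threshold, best_errors = None, len(data) + 1
    for candidate, _ in data:
        errors = sum(
            (amount >= candidate) != bool(label)
            for amount, label in data
        )
        if errors < best_errors:
            best_threshold, best_errors = candidate, errors
    return best_threshold

def predict(threshold, amount):
    """Classify a new, unseen transaction with the learned threshold."""
    return "fraudulent" if amount >= threshold else "valid"

threshold = train_threshold(training_data)
print(predict(threshold, 25.00))    # a small everyday purchase -> "valid"
print(predict(threshold, 1200.00))  # an unusually large purchase -> "fraudulent"
```

Notice that no rule was ever written by hand: swap in different labeled data and the same code learns a different cutoff.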

And AI is not new to us here at Amazon. We've got a rich history of over 20 years in that space, and we're just getting started. It's a big part of what we do, our past, our present, and our future. Back in 2001 or so, we were introducing personalized recommendation engines on the ecommerce side of the house. There are things like Amazon Alexa, as well as our fulfillment centers: how do those robots navigate and optimize their route planning? Predictive forecasting: what type of inventory, where do we need it, how much? And most recently, things in the Generative AI space like CodeWhisperer and Amazon Bedrock.

If you're interested in those topics, we got a ton of sessions for you this week, encourage you to check some of those out.

And before we dive in, we've probably heard some of these terms before, you know, we see Artificial Intelligence, Machine Learning, Deep Learning, all these terms in the headlines. Let's unpack and broadly define them here before we get started.

And so we'll start with Artificial Intelligence. We can think of that as the umbrella term we use to describe systems that can think, act, reason, or do tasks that previously required human judgment.

Now we covered what is Machine Learning, taking that data, using it to train models to recognize patterns in that and make predictions on new data, a subset of Artificial Intelligence.

And then Deep Learning. We can think of Deep Learning as a specialized type of machine learning. With Deep Learning, it's really the use of artificial neural networks, which stem from the biological neural networks in our brain. And with those neural networks, we found that we could tackle problems even more complex than traditional machine learning approaches might be able to solve, leading to advances in computer vision, object detection, natural language processing, those sorts of tasks.

And then we can think of Generative AI as a subset of Deep Learning, and it falls within that overarching discipline of Artificial Intelligence. The key word here is "generative".

And so rather than simply analyzing and classifying data, maybe like we see with traditional machine learning models, Generative AI is powered by massive, massive models that are pre-trained on internet scale data sets. And we'll talk a little bit about those foundation models here in just a bit.

And unlike traditional machine learning models that are more narrow, focused on a particular task, like identifying fraudulent transactions, they've been trained on that data and they might be good in that specific use case or domain. But I might need a completely separate model to do a different task. Whereas Generative AI, the same foundation model can be used to perform multiple tasks.

So why now? Why is it such a big buzzword right now? We're seeing a lot of hype around Generative AI. It's built on Deep Learning, and the theory for Deep Learning really goes back decades. So why are we seeing exponential growth in this space in recent years?

And it didn't come overnight. But some key factors that led to that were advancements in neural network architectures, specifically the introduction of Generative Adversarial Networks or GANs as well as the Transformer architecture back in 2017. We'll talk on that a little bit later here today.

So some advances in the neural network architectures as well as access to massive, massive, massive amounts of data, internet scale data, publicly available data in all different forms - text, images, other forms.

And then also access to massive amounts of specialized compute, a lot of those GPU backed instance types and specialized hardware accelerators that are able to run and train these massive models and perform a lot of that complex math calculations behind the scenes.

And then also investment: not just in the compute, but a willingness to go big, with large teams of researchers and that compute infrastructure.

So that's defining Generative AI at a high level. We've talked about Artificial Intelligence, Machine Learning, Deep Learning, and an introduction to Generative AI. We're seeing it being used to improve customer experiences, boost employee productivity, and drive business value across a lot of different use cases.

And like we'll look at again, unlike traditional machine learning, it goes beyond simply recognizing patterns: Generative AI models can be used to create entirely new content, conversations, stories, images, even code. The key word there being generative in nature.

And those models are pre-trained on massive collections of data like we'll look at in just a bit.

Here, how many of y'all have experimented with any form of AI-powered coding assistant? A handful. Alright, if you haven't, I encourage you to swing by some of our Spotlight Lab rooms here during re:Invent. We've got some cool labs, one even on CodeWhisperer, which I'm going to briefly talk about.

Amazon CodeWhisperer is a code completion tool created by Amazon using Generative AI, trained on lots of public code and various SDKs that you might use within AWS, like Boto3. It uses Generative AI to help developers write code and can automatically complete blocks of code. You can even just give it a comment and have CodeWhisperer take that comment and generate a function definition for you.
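As a hypothetical mock-up of that comment-to-function workflow (this is invented to illustrate the interaction, not actual CodeWhisperer output), a developer might type only the comment and accept a suggested completion like this:

```python
import re

# Developer types only this comment; the assistant suggests the function below:
# "function to validate a 5-digit or ZIP+4 US zip code"

def is_valid_zip_code(value: str) -> bool:
    """Return True for formats like '12345' or '12345-6789'."""
    return re.fullmatch(r"\d{5}(-\d{4})?", value) is not None

print(is_valid_zip_code("98109"))       # True
print(is_valid_zip_code("98109-1234"))  # True
print(is_valid_zip_code("hello"))       # False
```

The value is that the developer never left the IDE: the comment expressed the intent, and the suggested function (including the regex) arrived inline.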

So it can boost developer productivity, almost 10x in some cases, but it can also be used to help ensure that you're building secure, safe applications as well. It can help identify problems during code review, like what you see here with PII data, along with recommendations for how we might handle that.

And Amazon ran a productivity challenge during the preview of CodeWhisperer and found that participants were 27% more likely to complete the task successfully. And out of those who did, they were 57% faster than those not using CodeWhisperer.

And so it's been trained, again, on massive public code repositories across multiple programming languages. If you haven't checked it out, I encourage you to.

We'll be doing some questions towards the end. So hang on to that and we'll tackle it then.

And for CodeWhisperer, we've got some self-paced labs. I encourage you to go check those out and play around with it yourself.

And we're seeing Generative AI being used to improve customer experience, like I said, boost employee productivity.

And according to Goldman Sachs Research, AI is forecasted to increase global GDP by over $7 trillion in the next 10 years.

So let's look at some examples of how customers are using Generative AI. One of those customer experiences, you know, personalized chat bots, things like that.

How many of us enjoy calling into a call center, being put on hold, and maybe being handed off to different agents to get to the solution to our problem? Probably none of us, right?

So we can have a personalized virtual assistant that helps with customer experiences. Maybe you've got questions about a return for an item; rather than sitting on hold, we can serve those customers in a self-service fashion with Generative AI.

And not only does that boost the customer experience, but from a business perspective, it can drastically reduce all that money spent on call centers and handling these customer support kind of tasks in more traditional ways.

And it doesn't necessarily replace your customer support agents. Generative AI could be used to do some initial triaging, and it might be integrated with some of your backend systems to pull back information on what items this customer recently ordered, problems they've had in the past, and such.

And then we can even use it on call logs. Maybe you have years' worth of call logs from a customer support perspective; Generative AI can make sense of all that data in its raw textual form, summarize what the key recurring customer problems might be, and influence your business to focus on addressing some of those.

And then employee productivity. A lot of times we might have tons of different wikis and internal documents and other sources of information. But we spend so much time just figuring out where that resides and how to get to it and maybe asking other members of our team.

We're seeing advances in Generative AI for intelligent search, kind of question and answer as well. So being able to get that information quickly to decision makers, other pieces of your organization.

We're going to look at some examples of content creation, not just text, text summarization, producing compelling narratives, but could be images as well, audio or visual.

And then we saw code generation with Amazon CodeWhisperer as well, which can drastically help improve that developer experience. I can speak firsthand to this: a lot of times I would have to break my flow, move out of my IDE, go search the internet, and read lots of other people's problems to try to get to the bottom of what I'm trying to solve.

Whereas CodeWhisperer meets developers where they're most productive, directly in the IDE. So you're not jumping around and breaking that workflow.

And then from a creativity perspective, we're seeing a lot of advances here across music, art, images, animations. If you've got content generation as part of your business, it's likely something you're already considering or going to want to consider.

And then business operations as a whole. So document processing at scale, kind of replacing some of the more manual or tedious tasks, being able to do that at scale in a streamlined automated manner, maintenance assistance.

So maybe you've got technicians on a manufacturing floor and they need to diagnose certain problems. We could tap into some of the IoT sensor data about how that equipment is operating, as well as standard operating procedures, manuals, and such.

So we can get that information to people faster. Visual inspection is another area.

And we're even seeing synthetic training data creation. In AI/ML, when you're training machine learning models you need training data, but it's not always readily accessible at scale. We're seeing some advances there as well.

And looking at some industry specific use cases for Generative AI.

In healthcare, we've got tools like AWS HealthScribe that can automatically generate clinical notes by analyzing patient-provider conversations during a visit, HIPAA compliant and all the such there.

And my fiancée is actually a nurse practitioner, so I can speak firsthand to this: she spends an enormous amount of time on this task, writing clinical notes and putting all of that into the system.

And it can also streamline documentation processes, and maybe even be used to generate personalized visit recaps for the patient, summarizing the key things that were discussed. So you're not just scribbling notes while you're talking to your doctor.

And in the life sciences industry, we're seeing advances in drug discovery leading to new therapeutics: looking at the molecular structures of drugs and completely reimagining the process we would go through to discover new molecular compounds, and even protein folding as well, leading to different amino acid sequences, new therapeutics, and new cures within the life sciences domain.

And in financial services, fraud detection mechanisms. So going past that traditional supervised machine learning approach I used earlier, where I've got a big labeled data set with 10 million examples of legitimate transactions and a couple thousand fraudulent ones, we can take that a step further.

And Generative AI can even be used to create synthetic data sets that help speed up the process of identifying those bad actors or criminal rings because they're also kind of always changing their patterns and things like that.

And so we need to be able to keep pace with that. Portfolio management is another one: looking not just at technical factors from a technical analysis perspective, but holistic portfolio management, capturing information from earnings reports, other textual sources, news, how sentiment is trending, and such.

And then a couple other industries here, manufacturing, a big one as well. I used kind of the maintenance assistant example earlier of conversational agents maybe that are trained on maintenance manuals and all the standard operating procedures for equipment and then maintenance notes over time, you know, who repaired what and when and maybe even pulling in some streaming type IoT data of health performance of those.

And so it can really be used to optimize some of those automated factory floors, other things like that, process optimization, improve just the efficiency, discover hidden insights there, reduce cost, minimize waste.

Inside of retail, a huge, huge industry, we're seeing a lot of usage of Generative AI here.

How many of you have ordered something, and when it came it just didn't look or feel or fit like it did online when you ordered it? Not a pleasant experience. And then you've also got to return that item.

And so we're seeing things like virtual try-ons, so you can actually see not just a picture, but what it would look like on you: how does that shirt look, or how do those new shoes look? You can give your customers more immersive experiences and tailor them as well. Product review summaries are another one. And we'll look at an example in just a bit: maybe you've got thousands of different products in your inventory and you need to develop copy for your websites, and Generative AI can create compelling narratives tailored to specific customer interests.

And then in the media and entertainment industry, content generation is huge. You can ask Generative AI to create a script, give it some sort of tone and persona, and it can help accelerate that creative process, almost like a brainstorming assistant for scriptwriters. Virtual reality, too. We're also seeing a lot of advances in news generation: newsletters, articles, blogs. You can actually use Generative AI to streamline that and tailor newsletters for different sectors of your customer base.

So we went over a handful of use cases at a high level. We've got a lot to pack in today and I want to leave some time for questions at the end, so we'll work through a couple more in just a bit. But before we do, I want to speak towards Generative AI services on AWS. This isn't an exhaustive list; you're going to see other services, and this week during re:Invent, expect a lot of announcements, likely for advances in Generative AI and other services.

But if we start at the bottom layer, we need compute. We need compute for our Generative AI workloads, and AWS offers a wide range of compute: not just CPU-backed instance types, but GPU-backed instance types with different hardware accelerators for deep learning. And you need compute at two major times in that lifecycle: you need a lot of compute to train the model, and then you also need some compute to host and serve that model for inference, making predictions for traditional ML or generating new content with Gen AI.

And so when we think about training models, AWS has Trainium chips, custom built from the ground up for training large neural networks and deep learning workloads. And we also have Inferentia chips, purpose-built from the ground up for ML inference at scale, offering not just performance but significant cost savings. Another thing to be cognizant of is the beauty of the cloud, which you're probably all familiar with: the elasticity. We can grab more compute when we need it and tear it down when we don't, so we don't need to pay for a lot of excess capacity that might only be used during certain portions. We can scale elastically as needed.

And then in the middle, Amazon SageMaker. Customers can use SageMaker, which is a managed service, so it takes away a lot of the operational overhead of managing the underlying compute and other things like that. It can be used by your data scientists and ML engineers from an end-to-end perspective: data preprocessing, model training, tuning, deployment, even onwards to monitoring. We've got some other sessions specific to SageMaker if you're interested in that. And at the top, Amazon Bedrock.

And so Bedrock allows customers to access foundation models as a service. We briefly touched on that term, but we're going to unpack what a foundation model is in just a bit. This is extremely valuable because pre-training and building your own foundation model from scratch is no easy endeavor; it costs millions of dollars and maybe months of compute to do so. Why do that from scratch when you can leverage those foundation models as a service within Bedrock? And Bedrock is serverless in nature.

So you can quickly get access to these models via API calls. You can even do fine-tuning, tailoring, and customizing of some of those base models within Bedrock, and none of that data is shared with third-party model providers or used to improve the performance of the base model. Privacy of that data matters, especially if you're fine-tuning on corporate data sets or other intellectual property; you want to make sure you're cognizant of that.

So Bedrock lets you customize and fine-tune models with your own data, and you don't have to worry about any of that underlying infrastructure. SageMaker also provides SageMaker JumpStart, where you can get up and running with a lot of the popular foundation models: Amazon models like Amazon Titan, as well as Claude, AI21 Labs, and a handful of others.

So we've talked about what Generative AI is and looked at some common use cases and how customers are using Generative AI with AWS. Now let's walk through an example scenario. Imagine we're a big shoe company and we're launching a new line of walking shoes. Let's take a look at how Generative AI can be used across various stages in this product release.

Starting off, maybe our product team has done a lot of extensive research on market data about this new shoe we might be introducing. In order to move forward, we've got to present that to executives and get their sign-off and approval, and executives' time is precious; they're busy. So Generative AI can help produce concise, informative summaries, taking that large corpus of data and enabling executives to stay informed about what they need to know without spending tons of time reading lengthy reports and trying to digest a lot of complex information.

So here we've got an example of using a prompt for a Gen AI model, providing our market research report and saying: can you generate a detailed market research report for an executive presentation? And then we can see the output there. And from that, we could even ask further questions and customize it: can you make it shorter and more concise, or go into more detail here? And then, moving along, great news: we got approval. We're going to launch this new line of shoes.

There was a market fit for that, but we've got a tight timeline; leadership wants us to launch these in about a week. So we need some content generation. Our web developers are saying, hey, we need to add this to our catalog, give us some web content. And Generative AI is a game changer when it comes to creating compelling content: tailored descriptions of your products for website copy, streamlining that creative process. And we're looking at just one new shoe line here, but what if you've got thousands of products? Tens of thousands? You can see where the benefits in productivity come into play.

So we give it a prompt: write a product description of a shoe that's good for walking around London and has some of these material attributes. And the output we get is a nicely tailored narrative that we can use for our website copy. And here's another example of content generation: we've got our website copy, it's been updated with our inventory, but we need to generate some buzz. We need some social media and marketing campaigns. And so Generative AI can be used for the content generation here. And we'll talk about context coming up.

If you notice this prompt, we say: write a product announcement for social media based on the previous details. During a session with one of these models, we can maintain context, an understanding of what's been discussed or asked previously. And also, we can use the same foundation or base model for a lot of these tasks; we don't necessarily need separate, unique models like we might with traditional ML. And it's not just text: images as well.

We've got that tight timeline; we're launching this in London in less than a week, and we've got some stock photos of this new shoe, but we don't have the time and budget to fly out to London and take professional photo shoots to get tailored images. So not just text, but content generation in the form of images. This can streamline the pace at which that content is developed and reduce your operational cost a ton.

And maybe our launch is a major success in London, and then we want to expand to more global markets. We can use the same approach and tailor images for markets around the globe or specific segments of your customer base. Then there's code generation, which we saw earlier with things like Amazon CodeWhisperer. So maybe we've launched the shoe line and we've gathered some sentiment analysis, maybe clickstream data on websites, other things, and we want to make sense of it all. In order to do that, we need to get it to an S3 bucket.

So maybe our developer uses CodeWhisperer, our intelligent coding assistant. And here we can see that just based on the comment, create a Python script to upload files to S3, we can actually use CodeWhisperer to generate entire functions and blocks of code, which is pretty cool. I use it very commonly; a lot of times I use the Boto3 SDK with AWS, and it's handy for just stubbing out some comments and streamlining that process. And then things are going great, but we want to improve that customer experience as well.

And so earlier we touched on question-and-answer bots, chatbots, AI-powered virtual assistants. From a customer standpoint, like we talked about earlier, this is awesome: you don't have to sit on hold for 40 minutes and then get transferred to another agent to get the answer to your question. A lot of customers prefer to get that quickly via digital self-service channels. And from a business standpoint, we can also reduce a lot of the operational cost that goes into call center time and the expense of fielding customers via those channels.

And so here we could maybe fine-tune that model, like we'll talk about a little later, on product lines within our business, and maybe allow access to some backend databases with order information, customer information, and so forth, so we can actually get domain-specific query responses. The customer comes in and says, I want some assistance finding the tracking number for my most recent order. And from that, we can use Generative AI, along with integrations to some of that domain-specific data behind the scenes, to come back and say: hey, we saw you ordered this shoe on this date, the delivery date is tomorrow, and here's a tracking number for your reference.

All right. So we've looked at what Generative AI is, unpacked some of that terminology like AI, ML, Deep Learning, and Generative AI and where that fits into the equation, and we looked at some use cases. Now let's talk about some of the technical foundations and terminology. The goal here is really to give you a better understanding of how Generative AI works and also empower you to have more effective communication with the various business or technical stakeholders that might be involved in your Generative AI projects.

And we've got a tight timeline today, so unfortunately I'm not going to be able to give you a deep collegiate lecture on the topic, but we're going to go over some key terminology, starting with FMs, or foundation models. We touched on that earlier; this is the base for a lot of our Generative AI, and they're pre-trained on a large, large, large amount of data. I can't hit home enough the massive scale of data that these foundation models are trained on: we're talking internet-scale data, and models with billions of parameters a lot of times. What makes foundation models unique is that unlike traditional ML models, where we have very narrow, focused use cases and I might need 5, 10, 15 different models to do different tasks...

...the same foundation model can be adapted to a wide range of tasks. And we looked at some of those, like content summarization and content generation. It's not just text: images and other modalities as well, code, and those intelligent AI assistants or question-and-answer bots. Some examples of foundation models: the Amazon Titan models, Stable Diffusion, Llama 2, Claude, and a whole lot of others out there.

So how are these foundation models created, with that pre-training on a massive scale of data? What if I asked you to learn everything on the internet? A very hypothetical take here, but how would you do it? Say there are 6 billion pages on the web. Maybe you're a speed reader and you don't need any breaks: at 60 seconds a page, we're talking about 6 billion minutes of reading, which works out to over 11,000 years. Not going to be possible. But a foundation model can do this in a matter of months.

Currently it costs millions of dollars to develop a foundation model from scratch. But the nice thing is it's much faster and cheaper for your developers, your organization to use these pre-trained foundation models rather than training their own unique ones from scratch.

So pre-training this foundation model requires massive amounts of unlabeled data, text, images, audio and large scale training infrastructure. A lot of that compute and that leads to models with billions of parameters like we probably have seen in the headlines today.

And how does all this data get processed? Well, it comes in the form of transformers. We can think of transformers as almost the powerhouse of generative AI. There's a very famous paper from 2017 called Attention Is All You Need, if you're looking to dive deep on this topic. It's a specific type of architecture used within gen AI models, kind of like the brain of the system. It allows the model to understand and generate complex patterns: language, images, other data. And transformers are able to pay attention to multiple things at once, hence the name Attention Is All You Need.

So take an example: "I went to the bank last night to get some money for the casino. And now that ship is anchored on the bank." Same word, completely different meanings depending on the context in which it's used. And so transformers are able to retain and use that information: not just the word, but its context within the larger string of text, its meaning and its position.

And so let's use this example to illustrate how transformer models work. We ask the gen AI model to complete the sentence "A puppy is to dog as a kitten is to..." We probably all know the answer, but let's use it for the sake of example. Machine learning models don't take kindly to text; they like numbers. There's a lot of math going on, a lot of matrix operations.

And so before passing that text to our model to process, we need to tokenize it. Tokens are just parts of that overarching block of text: they can be words, phrases, individual characters, periods, and such. They give a standardization of the input data that makes it easier for models to process.

So we see that input of "A puppy is to dog, as a kitten is to..." and we see in orange there the tokens being created.
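As a rough sketch of that tokenization step (real tokenizers, like byte-pair encoding, split on learned subwords rather than whitespace and punctuation, but the idea is the same), a simplified version in Python might look like this:

```python
import re

def tokenize(text):
    """Split text into lowercase word tokens and punctuation tokens."""
    return re.findall(r"\w+|[^\w\s]", text.lower())

tokens = tokenize("A puppy is to dog, as a kitten is to...")
print(tokens)
# ['a', 'puppy', 'is', 'to', 'dog', ',', 'as', 'a', 'kitten', 'is', 'to', '.', '.', '.']

# Each unique token then gets an integer ID the model can actually work with.
vocab = {token: idx for idx, token in enumerate(sorted(set(tokens)))}
print([vocab[t] for t in tokens])
```

The output of this stage, a sequence of integer IDs, is what gets handed to the embedding layer discussed next.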

Now, what about those embeddings down below, and that encoding? In generative AI, embeddings are the magic behind language understanding. Models don't really work with raw strings of text; they like math. So embeddings transform words into meaningful vectors that machines can comprehend.

So a word embedding is an N-dimensional vector that represents that word. And here, just for example's sake, we've got a little bit of a graph, and you can see "cat" and "feline" sitting closely together, with "canine" and "puppy" and "young" over there. So we can think of word embeddings as the bridge between how we use words and how machines interpret and respond to them.
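To see what "close together" means concretely, here's a toy illustration. The vectors below are made up for the example (real embeddings have hundreds or thousands of dimensions learned from data), but the standard closeness measure, cosine similarity, works the same way:

```python
import math

# Invented 3-dimensional embeddings for illustration only.
embeddings = {
    "cat":    [0.90, 0.80, 0.10],
    "feline": [0.85, 0.75, 0.15],
    "puppy":  [0.10, 0.90, 0.85],
    "canine": [0.15, 0.80, 0.90],
}

def cosine_similarity(a, b):
    """1.0 means same direction (very similar); values near 0 mean unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# "cat" should land much closer to "feline" than to "canine".
print(cosine_similarity(embeddings["cat"], embeddings["feline"]))
print(cosine_similarity(embeddings["cat"], embeddings["canine"]))
```

Because similar meanings point in similar directions, the model can reason about relationships between words numerically.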

So after the tokenization, the transformer encodes those tokens into the N-dimensional vectors we call embeddings. And then a decoder comes into play: once all the tokens have been encoded, the transformer's decoder uses that vector representation to predict the requested output, generating or predicting the next token.

And the transformer architecture has built-in mechanisms to focus on multiple things at once, different parts of the input, and guess the matching output. There are a lot of complex mathematical techniques used behind the scenes to evaluate different options like "cat" or "bird" or "bear" or "human", depending on the context.

And so in our scenario, the decoder completes the analogy: a puppy is to a dog as a kitten is to a cat. So the model outputs that prediction and generates the next word in that sentence.
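The last step of that prediction can be sketched very simply: the decoder ends up with a raw score for each candidate token, and a softmax turns those scores into a probability distribution. The scores below are invented for illustration; a real model produces them from the attention layers.

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for candidate next tokens, given the input
# "A puppy is to a dog as a kitten is to ..."
candidates = ["cat", "bird", "bear", "human"]
logits = [4.2, 1.1, 0.7, 0.3]

probs = softmax(logits)
prediction = candidates[probs.index(max(probs))]  # highest probability wins
```

Here "cat" gets by far the largest share of the probability, so it is emitted as the next token.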

So to reiterate, an important point about transformer models compared to a lot of their predecessors is that they're able to parallelize their compute: rather than processing the input tokens one at a time, sequentially, they can process all the input tokens at once.

And then context, the final topic to cover here in that foundational knowledge and terminology. And that's key: context is the text in the active conversation you're having. It does not persist, and it has a maximum size; it's finite. We've seen different approaches to context windows, including sliding context windows. They can vary in size, but they're finite in nature. But context is key.

Let's use an example here. How does context work? So we give a prompt: "What's the best place in Seattle to visit?" Model comes back with a response: "The Columbia Center offers breathtaking views of the city skyline."

And then we come back with an additional prompt, so multi-turn dialogue, but here in the same session: "Will this be fun for children?" What is "this"? How is our transformer architecture, our model, able to understand what "this" refers to in the current context? Are we talking about a specific place? Seattle? Just my visit?

And so the model has here correctly decided that the Columbia Center is what "this" is referring to. So that's maintaining context: we can have conversations with these AI models, and as long as we stay in that same session, we can maintain context.

So if a customer calls in about a specific order, we fix that issue, and then they ask about something else, we don't need to re-establish and figure out what was talked about before.
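Under the hood, that "memory" is usually nothing more than a growing list of messages that gets resent to the model on every turn: the model itself is stateless. A minimal sketch, with a crude word-count stand-in for real token counting (the names and limits here are illustrative, not any particular API):

```python
conversation = []

def add_turn(role, text, max_tokens=1000):
    """Append a message; drop oldest turns if the window overflows."""
    conversation.append({"role": role, "content": text})
    # Crude size estimate; real systems count tokenizer tokens and may
    # summarize old turns instead of dropping them.
    total = sum(len(m["content"].split()) for m in conversation)
    while total > max_tokens and len(conversation) > 1:
        dropped = conversation.pop(0)  # sliding-window approach
        total -= len(dropped["content"].split())
    return conversation

add_turn("user", "What's the best place in Seattle to visit?")
add_turn("assistant", "The Columbia Center offers breathtaking views.")
history = add_turn("user", "Will this be fun for children?")
```

Because the earlier turns travel along with the new question, the model can resolve "this" to the Columbia Center; and because the list is finite, the context window eventually fills up.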

So that's a quick, high-level overview of some of the key terminology and a little bit of how these generative AI models work.

Let's talk about planning a generative AI project, likely the focus of why a lot of y'all might be here today. And so we're going to break this discussion into four different steps. With any project, we need to define the scope, and that's especially important with generative AI: nailing down a scope and prioritizing efforts too. What do we focus on first? How do we allocate our time, funding and resources?

And then there's a handful of models out there; how do we select an appropriate one based on our use case? And again, we don't necessarily need to train that foundation model ourselves from scratch. It's a very costly endeavor: millions of dollars, months of compute. There are a lot of open source foundation models that we can select from that might be appropriate for our use case.

And then we can adapt that model, further tailor it using different techniques or relevant data. We'll briefly touch on prompt engineering and fine tuning here today. There's some great sessions at re:Invent that will go much, much deeper into some of these topics. But we can actually take that foundation model and maybe adapt or tailor it to our specific use case.

And then using the model. It's not just click a button and throw it in production. Have we made sure that we've evaluated risk concerns, ethical concerns? Do we have proper mitigation strategies in place for that? What's our intended usage? How are we going to integrate that into our application?

And even more importantly, feedback loops are extremely important. You want to know how that's operating, and maybe use insights over time to further fine-tune or tailor it.

So it sounds simple, defining the scope, but it's a critical step in really any project, and especially your generative AI projects. Ask yourselves:

  • Do our customers want this? And this might be your external customers, internal customers, your own employees or workforce.
  • What is your target audience?
  • What are your desired outcomes?
  • And how are you going to measure success on those? What's your success criteria? Because if we don't have any of that in place at the start of our project, then great, we've released this fancy new technology, but do we have any understanding of whether it's delivering value to our business? Is it operating as we expect it to?

And then:

  • Can our organization do this? What's the level of difficulty, cost, funding, any technical challenges, skillsets that we might require?
  • And then should our organization do this? So what's the potential business value, revenue, return on investment? What are the competitors doing in that space?

And then, not all use cases are created equal. It's important to assess both the short term and long term impacts that different solutions might provide, as well as their implementation timelines. How do you go about prioritizing things? Can you start multiple solutions in parallel? Ideally, that'd be great, and maybe you can; or maybe you want to prioritize certain ones.

So here we'll consider a couple of examples. Maybe an easy win like tapping into Amazon CodeWhisperer to improve developer productivity with that generative AI powered coding assistant. That's a fairly easy win. Not much you have to do there really, just maybe some training surrounding that and encouragement to your employees. And we'll talk about building a generative AI organization a little bit later. That's a fairly easy one.

Other use cases might require some more robust planning and a little bit longer implementation timeline. Say we're looking to improve that customer experience with a gen AI powered chatbot or virtual assistant, and we want to reduce all that call volume to our call centers: a little bit more planning, maybe a longer horizon here.

So once we define that scope, next we need to select a model, because again, you don't need to create these foundation models from scratch; it's an extremely difficult, costly endeavor. So we're selecting what model might work. And with new capabilities continuously coming out in the gen AI space, it's extremely important to have some form of a framework to evaluate which model and level of customization might be right for you or your use case.

So we'll use that AI-powered virtual assistant or chatbot example we touched on, asking ourselves: are these questions from our customers just going to be general questions, or are they going to be more focused and domain specific? So I've got options. I can use one of those pre-trained foundation models out of the box, and that's suitable for general tasks, maybe when time is of the essence and customization is minimal.

But then for certain use cases, maybe I want to be able to actually have domain specific responses for my customers and my products. And so I can fine tune an existing pre-trained foundation model. And that gives us the flexibility to customize and tailor that model and the outputs specific to your needs.

So this might be necessary when you have more complex datasets, or you're looking for more complex question-and-answer scenarios over your own domain specific data and knowledge. Now, it's going to require a little bit more computational resources to do some of that fine tuning, though not nearly the amount of compute that goes into creating that foundation model. So fine tuning is not creating one from scratch; we're just tailoring it, customizing it to meet domain specific needs.

So a little bit of extra compute, some time, some resources and some skillsets to do that. And what approach is right? Try to work backwards from the problem you're trying to solve. Where will the data come from? What level of customization might I require?

And so maybe here in this chatbot example, we're looking to just use a pre-trained foundation model, but we can feed it some additional context down the road when we're using it, with approaches like RAG, or retrieval augmented generation. And then for adapting our model, there are a handful of different ways you can customize and tailor the output depending on your needs and use case, many more than we're going to cover today.
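The core RAG idea can be shown in a few lines: retrieve the documents relevant to the question and prepend them to the prompt, so the model answers from your data without any retraining. Real systems retrieve by embedding similarity from a vector store; this toy sketch scores documents by word overlap, and all the document text is invented for illustration.

```python
import re

# Hypothetical knowledge-base snippets a chatbot might draw on.
documents = [
    "Order #123 shipped on May 2 via ground freight.",
    "Our return policy allows refunds within 30 days.",
    "The Columbia Center is an observation deck in Seattle.",
]

def words(text):
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, docs, top_k=1):
    """Rank documents by word overlap with the query (stand-in for
    embedding similarity) and return the best top_k."""
    q = words(query)
    ranked = sorted(docs, key=lambda d: len(q & words(d)), reverse=True)
    return ranked[:top_k]

def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What is the return policy for refunds?", documents)
```

The model then sees only the relevant snippet as grounding context, which is why RAG is often the cheapest way to get domain-specific answers from an off-the-shelf model.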

But let's cover two popular methods, prompt engineering and fine tuning, with that chatbot example. So prompt engineering: really, this is just the process of defining and refining your prompts, or inputs, in order to have that model produce output that better suits your needs.

And so maybe you've got some notes from a meeting and you want to summarize them into meeting minutes. I could just, out of the box, say "Here's a bunch of notes from the meeting. Can you summarize this into meeting minutes?"

But maybe you want some standardization to that, the way you do it in your business or the way your customers expect. And so with prompt engineering, instead of just providing that single input, we could give a couple of examples as part of that input and say "Here are some notes from a meeting and an example output of some meeting minutes." That could be zero-shot, where I just come in and ask what I want to ask; one-shot, where I give a single example; or multi-shot, where I give several.

So maybe a couple of examples, and then providing those as part of our input. We're not fine tuning the model; we're just doing some tweaks to the input, or prompt, that we provide, which can help produce outputs more suitable to our needs.
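Mechanically, zero-shot versus few-shot is just string construction; the model's weights never change. A sketch of that meeting-minutes example, where the example pairs and the "Notes:/Minutes:" labels are made up purely to show the pattern:

```python
def build_prompt(task, examples=None):
    """Assemble a prompt, optionally prepending worked examples."""
    parts = []
    for notes, minutes in (examples or []):
        parts.append(f"Notes: {notes}\nMinutes: {minutes}\n")
    parts.append(f"Notes: {task}\nMinutes:")
    return "\n".join(parts)

# Zero-shot: just the task itself.
zero_shot = build_prompt("Discussed Q3 budget; Alice to draft forecast.")

# Few-shot: the same task, preceded by an example that demonstrates
# the structure we want the model to imitate.
examples = [
    ("Team met re launch. Bob owns QA.",
     "Attendees: team. Decisions: launch planned. Actions: Bob - QA."),
]
few_shot = build_prompt("Discussed Q3 budget; Alice to draft forecast.",
                        examples)
```

The few-shot prompt nudges the model to copy the structure of the worked example, which is where the standardization comes from.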

Whereas fine tuning, we can think of that as a continuation of the pre-training that went into creating that foundation or base model. It creates a new, specialized model, but it requires some compute, like we looked at, and some very high-quality labeled data. Make sure you have at least 500 examples of high-quality labeled data that fit whatever use case you're looking to accomplish, and ideally, you know, much more.

But this would be an example: maybe you've got those call center logs and you want them standardized, and you want the structured output of "This is the problem, the resolution, the triage process." And so you could fine tune a model with a couple thousand to 5,000 examples that are very high quality and labeled.

So here's an example of some call center chat, and here's the summary and how I want that structured and such. With fine tuning, you actually change some of the parameters of the model, though not all of them, to create a new model specific to your use case.
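What that labeled data might look like on disk: many fine-tuning services accept prompt/completion pairs in JSON Lines format, one record per line. The exact field names vary by provider, and the transcripts below are invented, so treat this as a sketch of the shape, not any specific API.

```python
import json

# Hypothetical fine-tuning records: raw call-center transcript in, the
# structured summary we want the tuned model to produce out.
records = [
    {"prompt": "Customer: my order arrived damaged. Agent: replacement sent.",
     "completion": "Problem: damaged order. Resolution: replacement shipped."},
    {"prompt": "Customer: cannot log in. Agent: reset password.",
     "completion": "Problem: login failure. Resolution: password reset."},
]

def to_jsonl(recs):
    """Serialize records one JSON object per line for a tuning job."""
    return "\n".join(json.dumps(r) for r in recs)

dataset = to_jsonl(records)
```

The investment is almost entirely in curating thousands of records like these; quality of the labels matters far more than clever code.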

Now, the cost can add up quickly here if you're working with very large models. So earlier, as part of that model selection process: different models come in different shapes and flavors. We've got 10 billion parameters, 70 billion, 100 billion. So depending on the complexity you might need, that might be a factor in which choice you go with.

And then step four: use the model. But this isn't as simple as clicking a button and shipping it off to production. You need to have a plan in place about that intended usage, and hopefully you got this upfront in step one when you were defining the scope. What is your target audience, the intended usage? What's your success criteria? How are you going to measure the effectiveness of that? And then make sure you've addressed responsible AI considerations and have a plan in place for continued monitoring.

So you can actually see how the solution is working in the world, and monitor it not just for its own sake, but to collect data and use that as a feedback loop, maybe longer term going back to do some additional fine tuning on a larger data set that you've collected over time. And a lot of the same MLOps principles that you would have with traditional ML models hold true here, all those same best practices. In fact, there's a term for it, FMOps: helping ensure that you get your models to production, but also keep them there, keep them aligned with your business goals.

And so do you have a plan to collect feedback from users as well? This can be extremely important. In the creation of some of these foundation models, we're seeing reinforcement learning with human feedback, or RLHF. So it's extremely important to be capturing that, and some of that might be, you know, producing two different examples and having users select which one they prefer; that can be used to help improve the model over time. And how are you tracking changes to that pre-trained model, so that you can then retrain or fine tune using reinforcement learning with human feedback or other approaches?
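Capturing that preference feedback can be very simple in structure: show two candidate responses, record which one the user picked. The resulting (prompt, chosen, rejected) triples are the usual raw material for RLHF-style reward modeling. A minimal sketch, with hypothetical field names and example text:

```python
feedback_log = []

def record_preference(prompt, response_a, response_b, user_choice):
    """Log which of two candidate responses the user preferred."""
    if user_choice == "a":
        chosen, rejected = response_a, response_b
    else:
        chosen, rejected = response_b, response_a
    feedback_log.append(
        {"prompt": prompt, "chosen": chosen, "rejected": rejected})

record_preference("Summarize our refund policy.",
                  "Refunds within 30 days, no questions asked.",
                  "We do refunds sometimes.",
                  user_choice="a")
```

Even before any retraining, a log like this tells you where the model is falling short, which feeds directly into the monitoring and success criteria discussed above.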

So those are four different steps we can think of going into planning a generative AI project. The biggest thing I could say is: scope upfront. Don't just jump into something because it's cool and there's a lot of excitement about it. You need to have a clear, well-defined scope before you embark on any of these endeavors. And it's not going to happen overnight, either. We talked about prioritizing different use cases, so take steps, build that culture of generative AI. And we're going to look at that in just a bit.

So we've seen generative AI and its benefits, but it's also important to recognize potential risks or problems that might arise. For any of these cases, first ask yourself: is generative AI appropriate for my problem or task? It's not always the right solution. Assuming it is, we need to take into account the risks of using it and whether or not those can be mitigated. So there are some different considerations here across fairness and privacy. Fairness: making sure that, you know, we don't have any unintended consequences for certain groups of people, especially in highly regulated industries like lending and loan approval, from a regulatory compliance standpoint.

And this becomes considerably harder than what we would do for fairness with traditional ML models. With a traditional ML model for credit card fraud detection or loan approval, I can ensure that the training data set I used is relevant and has examples of those different demographographics, so that I'm not introducing bias into that model. Whereas with a generative AI model, it's a little bit more complex. So it's something that should definitely be on your radar: a little bit harder to define, measure and enforce, but something that should be at the top of your mind. And then privacy, a big concern here: especially making sure that these models don't leak pertinent information, proprietary information, customer information, things that are part of their training data. And so this is extremely important.

One nice thing, like we looked at earlier with Bedrock: any data that you use to fine tune one of those foundation models is not shared with third party model providers, and it's not used to improve the performance of those base models. So make sure we're cognizant of that. And then some risks here and some of the mitigations. With great power comes great responsibility, right? With gen AI being new and evolving at a rapid pace, it's important to be aware of these risks and practice responsible AI.

So let's look at some examples and some mitigation strategies we might employ. Toxicity: this is harmful, inflammatory, offensive content that might be produced. Different mitigations here: maybe curation of your training data to ensure those examples aren't there, or guardrail models, which we're seeing being used to detect and filter out some of that unwanted content. And hallucinations, which we've probably all heard about too: not all the assertions or claims that these generative AI models produce are factually correct. So you want to be teaching users to check and validate some of these things, and to be cognizant of those risks.
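The guardrail idea can be sketched as an output filter that screens generated text before it reaches the user. Production deployments use dedicated classifier models rather than a word list; the blocklist and messages below are purely illustrative placeholders.

```python
# Illustrative placeholders; a real guardrail uses a trained classifier.
BLOCKLIST = {"offensive_term_a", "offensive_term_b"}

def guardrail(text):
    """Return the text if it passes, or a refusal message if not."""
    if any(word in BLOCKLIST for word in text.lower().split()):
        return "[Response withheld by content filter]"
    return text

safe = guardrail("Here is a helpful answer.")
blocked = guardrail("something offensive_term_a something")
```

The important design point is the placement: the filter sits between the model and the user, so even a model that occasionally misbehaves never surfaces that content.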

Then intellectual property, like we saw, and concerns about privacy: making sure that data is secure, with encryption as well as filtering. You can filter generated content to remove protected content; if a match is found, remove it. We're seeing plagiarism and cheating a lot in the educational spaces as well. Now, there are a lot of benefits to generative AI in education too, tailored learning assistance and such, so we need to balance those risks and benefits appropriately. And then disruption to the nature of how we do work: this is going to happen. It doesn't necessarily mean jobs are being eliminated. It's going to change the way in which tasks or work get done, but it's also likely to create entirely new jobs, some of which we don't even know about just yet. So be cognizant of that, and talk to your employees about it along the way.

And that's a perfect segue to our last module. I'm going to go through this a little quickly because I want to make sure we've got a couple of minutes for Q&A at the end. So what does a generative AI organization look like? We're going to talk about strategies for integrating generative AI into your org: how you build the people, the process, the culture for success, and the importance of having some governance structure around that. And then some actions you can take right after the session.

So starting with preparing your organization: start with your leaders. You want to make sure that you drive leadership alignment, and that then trickles down, providing resources to your employees for training and other things. So what's your organizational readiness? What's the impact of change to employees? It might change some of your operating models as well. And then moving on to your employees: start by explaining what generative AI is. Education is the strongest tool here. There's fear, and there's sometimes just "I don't know, it's too complex, I don't know where to get started." But it's just large machine learning models, like we've seen. So instead of, you know, leaving those barriers to entry, explain what it is, educate your employees, and then address concerns about job security. They might be worried about how things are going to change or reshape; address those.

And in fact, foster good communication; change some of those concerns to excitement. "Oh, wait, now I don't have to do that manual, tedious task anymore. Awesome. That's going to free me up to do this." And so new roles, new products, new ways in which we might be doing business. And then the potential benefits of generative AI to the company as a whole: automating those tasks, improving customer engagements and interactions, reducing cost, streamlining processes. We saw earlier, with that shoe example, generating entirely new content in the form of text and images, tailored marketing campaigns, you name it. There are tons and tons of possibilities out there.

And then fourth, encourage your employees to provide feedback. Feedback loops are important with everything we do within machine learning, generative AI and all the such. So encourage your staff to share their thoughts or concerns at work, and it could be done, you know, through anonymous surveys, focus groups, one-on-one meetings. Start with your leaders and come up with a plan for how some of those communications are going to be made, and then, based on the input you receive, what's our plan to address it? And then keep that feedback loop going, because this isn't just a one-time activity. With the pace at which generative AI is evolving, you're going to want to continuously capture that feedback, and then also emphasize the importance of continuous learning.

If you think you know it all, you likely don't. My favorite thing is realizing that there's something I don't know; get that excitement about continuous learning, because what's out there today might shift in another year or two. We're seeing a rapid pace of innovation. And if we encourage that culture within our organization and foster that feedback, you're going to find oftentimes that a lot of your employees are coming up with new use cases or ideas for you once they have an understanding of what generative AI is and the benefits it can provide. A lot of times they know the fine-grained details about specific processes or tasks that your organization or business might do.

So, the importance of continuous learning: this is also going to foster maybe more use cases coming down your pipeline. And so how do we organize for success? It doesn't happen overnight. We want to position those teams for success, and we talked about education, continuous learning, communications and collaboration. You might not necessarily have all the experts required for some of these projects right on day one. And so collaborate with experts in the field; they can provide, you know, detailed guidance, unique perspectives and advice. And I encourage you: we've got a ton of experts here at re:Invent, and they're going to be speaking on some of these topics. So check those out, and hang out and talk to me a little bit after the session as well. And then focus on data quality and availability. You've got to have good data with anything you're doing; it's essential to producing accurate and reliable systems. And then a governance model; you need that governance model.

And this isn't something you think about when you deploy; this is something we think about up front during that scoping: a framework for managing the risks and benefits of this new technology. What's the responsibility and accountability of the system? You know, how do we track that over time? Are we sure that we've made ethical, responsible AI considerations with fairness and transparency? And then closing it out, taking some action now.

So hopefully, and I know this was a lot to pack into our one-hour session, but to recap: infuse that generative AI thinking. Start with a use case, start with a couple of use cases. Build on that success. Educate your stakeholders. Experiment, too, with small-scale projects: test how it works, build that knowledge and come up with a strategy. Implement and monitor. Monitoring is extremely important, because if you implement without it, you're not going to have any idea if you're actually delivering business value. So determine that success criteria, and highlight your successes and your failures. Don't just brush those under the rug; learn from them.

And so closing it out: we've got a ton of great resources. Visit us at the Training and Certification lounge, or the challenge lounge in the expo. We've got tons of team members you can speak with, and access to a lot of good material. And we've got the self-paced labs I alluded to earlier: a CodeWhisperer one and a handful of other gen AI ones. Check those out if you're looking to get more hands-on. And if you don't have a Skill Builder account, or your employees don't, if you take one thing away: create one. We've got a free seven-day trial you can redeem for some of our subscription-based content, and there's also tons of free digital content out there. Take advantage of that. Hopefully you enjoyed the session. We've got just a couple of minutes left, so I'll open up the floor for some Q&A, and I can also hang out outside the room when the next session starts.
