A leader’s guide on low-effort ways to adopt generative AI

Welcome everyone to re:Invent, and thank you for joining us for this session on low-effort ways to adopt generative AI.

Now we have heard some great announcements in this space today and yesterday. So in this session, we are going to take a very use case driven approach to simplify this for you.

This is a 200-level breakout session, and we are going to take a bunch of use cases. We're happy to take any questions after the session.

I'm Akshara Sha. I'm a senior solutions architect. I've been with AWS over four years. I work with our commercial customers on their AI/ML use cases, especially generative AI.

I'm joined today by Prasad, who works extensively in this area, and also Shri, who will be covering productivity tools using generative AI.

Together, we have worked with a large number of customers on generative AI, so our hope is to share that experience with you through this session. Thank you.

Now, before we talk about what I'm going to cover in this session, I want to quickly talk about why we chose this topic.

So, with a quick show of hands: how many of you are in the experimentation phase with generative AI? Awesome. And how many of you are in production? OK.

So in May 2023, Gartner did a survey of around 2,500 executive leaders, asking them what stage of generative AI adoption they were in. 14% of them said that they were in pilot mode, and a very small percentage, 5%, said that they were in production.

Fast forward to a similar survey done in October 2023: this time, 45% of them said that they were in the experimentation phase, and a still-small 10% said that they were in production.

Now, these numbers not only show the industry's acknowledgement of the transformative potential of this technology; they also reflect that a lot is yet to come in this space.

So that is why we have created this session so that we can share some easier methods to adopt generative AI with you.

Now, there are three things that you would absolutely need to be successful in adopting generative AI.

First, you would need to know what this technology is and what its unique benefits and risks are.

Second, you would need to identify and implement use cases that leverage your data as a differentiator.

And lastly, you would need to know what it means to use this technology accurately, fairly and appropriately.

So today, we are going to cover all of these topics, but the focus of this session is going to be on identifying those use cases and covering easier methods to adopt them.

Traditional algorithms parse data, learn from those patterns and make predictions. Generative AI not only learns from those patterns and makes predictions, but is also able to create original and novel content. This could be writing blog posts, summarizing articles like in this example, creating images or videos, or even writing code.

This technology is powered by machine learning models that are trained on large amounts of data. Think of all the wikis on the internet. These machine learning models are called foundation models.

Now, because these foundation models are trained on large amounts of data, they are capable of performing a wide variety of tasks like text generation, summarization, classification, Q&A and many more.

So at a high level, a text-based generative AI model, or large language model, is good at using natural language processing to analyze text and infer the next word in the sequence. To do this effectively, it pays attention to the sequence of words and phrases in the input to infer the context.

Which brings us to the first challenge for this technology, closely related to an industry-wide term used here: hallucination. Because these models use probability to predict the next word in the sequence, they sometimes produce responses that are generic, inappropriate or even factually incorrect.
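To make that next-word prediction step concrete, here is a toy sketch. The candidate words and scores are made up for illustration, not taken from a real model: the model assigns a score to each candidate next word, converts the scores into a probability distribution, and the highest-probability word wins.

```python
import math

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next words
# for the prompt "The smartwatch connects to your ..."
candidates = ["phone", "toaster", "wrist", "cloud"]
scores = [4.1, 0.2, 3.0, 1.5]

probs = softmax(scores)
next_word = candidates[probs.index(max(probs))]
print(next_word)
```

Because the choice is probabilistic, a plausible-but-wrong candidate can sometimes win, which is exactly where hallucination comes from.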

Now, this problem is addressed by specific techniques, which we are going to cover in this session.

The next challenge is a skill gap. The paradigm shift that has led to the adoption of this technology has been relatively recent, which has left a large number of organizations feeling the need to hire specialized professionals.

Today, we are also going to cover some low-code techniques to build generative AI applications.

The last challenge is cost. These are difficult times for many organizations, and it is not just a matter of adopting generative AI or trying it out; it is also about embedding it in your own business processes.

This is where generative AI in the cloud helps: you can combine it with several other building-block services so that you can move into production at lower risk and cost.

Generative AI consumption can be divided into three parts.

First are the model consumers: those who directly consume a pre-trained foundation model.

Second are the model tuners, who use their own data to customize these models for a specific domain or use case.

And lastly are the model producers, who pre-train these models on large amounts of data, usually from scratch. These model producers are companies like Amazon, Anthropic, Hugging Face and many more.

Now, no matter which category you fall into, these models can be used for a wide variety of tasks. But today we are going to cover use cases that can be built directly by consuming the models.

So in short, we are going to cover model consumers category.

So now let's learn about how to identify and then implement some of these use cases.

AWS has curated a list of generative AI solutions by industry that you can use to identify the standard use cases. The QR code on the screen will take you there.

The next problem is how to prioritize these use cases. This is where we recommend that you work backwards from your customer and start with high-value, low-effort use cases.

So now we have learned what generative AI is and how to identify some standard use cases for your industry. But how do you go about identifying use cases that bring value to your business?

This is where, ideally, you need a playground where not only your data scientists and engineers but also leaders, analysts and business team members can come and experiment to identify those unique use cases that are applicable to your business.

To expedite this process, you would need this platform to be no-code, secure enough to bring your own data into, and a collaborative space for your teams.

Now, this was the same problem that Thomson Reuters was facing, and they solved it by creating a web-based playground where their teams can come and experiment with LLM-enabled tools. They call it Open Arena. The link on the screen will take you to a blog article with screenshots of this experience; I highly recommend you check that out.

So now we are saying that we need something like this. But how do you go about building this no-code, enterprise-grade workspace quickly? Using Amazon SageMaker Canvas, you can build a secure, easy-to-use and collaborative no-code generative AI workspace.

The service comes with a chat interface to experiment with a wide variety of AI capabilities like summarization, document insights, and question answering on your text. For generative AI use cases, you can also experiment with a large variety of publicly available and proprietary models.

Amazon SageMaker Canvas also gives you access to data within your own account, which means your data remains private and confidential. And to top it all, it has usage-based pricing instead of licensing fees, which can be a huge cost saver.

Now, generative AI creates significant value across a large variety of use cases: creativity, where you want to create engaging product experiences; customer experience, using intelligent chatbots that talk to your own data; document processing; and even code generation.

So we are now coming to the core of the session, where we will take one persona for each of these use cases and cover an easier way to adopt it. With that, I'd like to invite Prashant to the stage to cover creativity-related use cases.

Thank you, Akshara. Hello, everyone. I am really excited to come and talk to you about generative AI use cases and dive into how they can be implemented with low effort using AWS. We will start with creativity: generative AI has revolutionized creative work by enhancing content creation, image generation, and collaboration with human intelligence.

Today, we'll be looking at two specific use cases: content generation and knowledge curation. So let's dive in.

So before we get started with the content generation use case, I wanted to explain how I'll be walking through the different use cases. Let's imagine a retail e-commerce company; let's call it AnyCompany. For each use case, we'll put ourselves in the shoes of a particular persona in that company and look at the business challenges they face.

Then we will envision how generative AI can solve that particular business challenge. And finally, we'll look at low-effort ways to implement that solution with AWS services. Sounds fair?

So in the first use case, content generation, imagine you are a marketing content writer, right? Your job is to write summaries for the multiple products that get displayed and sold on your website.

So normally you have to sit down, write them manually, be creative about it and you have to do it across multiple products.

So how can generative AI help over here?

So let's imagine the marketing content writer is sending this request to a generative AI application to write a creative product summary for a smartwatch product. And he has provided some keywords to use as part of the summary.

Let's see how the generative AI application does. Pretty good, right? It was able to give you a creative summary as well as use all the keywords that were asked of it. By the way, before we proceed, a small caveat: we did present all these questions to real foundation models, the answers are based on their responses, and no foundation models were harmed during the making of this simulation.

So just so that everybody is happy.

So let's see how we can implement this with low effort on AWS. The easiest way to get started with building a creative content generation solution would be to send this task to a foundation model service using a secure API call, select from a variety of foundation models, and have that model send back the response for you to use.

We recommend Amazon Bedrock to implement this solution in a low-effort way. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models from leading AI companies like AI21 Labs, Anthropic, Cohere, Stability AI, Meta and, of course, Amazon's own models, all through a single API, along with a broad set of capabilities that you need to build generative AI applications, simplifying development while maintaining privacy and security.

With Amazon Bedrock, you can experiment with these different models in a model playground available on the console. You can tweak your parameters and your prompts to come up with the ideal prompt, and then you'll get the equivalent API representation of that prompt that you can plug into your application.

It's very easy to get started: you can easily embed the Amazon Bedrock API in your existing application, like a chatbot. And you should keep in mind certain key tips when you get started: just like with any API-based application, you should build in caching and retry mechanisms to make it a scalable, production-grade solution.

Also, you should build in post response filters so that any unwanted content that should not make it into the final response can be removed.
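As a sketch of those tips, here is how a Bedrock call with retries and a post-response filter might look in Python. The model ID and request body follow the Anthropic Claude text format offered at launch, but treat them as assumptions and check the current Bedrock documentation; the deny-list and helper names are illustrative, and boto3 is imported lazily inside the function so the filter can be used on its own.

```python
import json
import time

BLOCKED_TERMS = {"competitorbrand"}  # illustrative deny-list, not a real policy

def filter_response(text, blocked=BLOCKED_TERMS):
    """Post-response filter: drop sentences containing unwanted terms."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    kept = [s for s in sentences if not any(b in s.lower() for b in blocked)]
    return ". ".join(kept) + ("." if kept else "")

def generate_summary(prompt, model_id="anthropic.claude-v2", retries=3):
    """Call Amazon Bedrock with simple exponential-backoff retries."""
    import boto3  # AWS SDK for Python; imported lazily so the sketch loads without it
    client = boto3.client("bedrock-runtime")
    body = json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": 300,
    })
    for attempt in range(retries):
        try:
            resp = client.invoke_model(modelId=model_id, body=body)
            text = json.loads(resp["body"].read())["completion"]
            return filter_response(text)
        except client.exceptions.ThrottlingException:
            time.sleep(2 ** attempt)  # back off before retrying
    raise RuntimeError("Bedrock request kept throttling")
```

In a real deployment you would also cache responses for repeated prompts, as mentioned above, rather than paying for the same generation twice.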

So now that you have seen a low-effort way to get our marketing content writer started with generating creative content, let's move forward.

One of the key questions customers ask me about Amazon Bedrock is about privacy and security, and everybody is concerned about it. With AWS, the security and privacy of customer data are a top priority. From day one, none of your data or prompts are used for service improvement or shared with the model providers, nor are they used to train the base models.

Customers can configure their virtual private clouds, or VPCs, to securely connect to these endpoints. In addition, Amazon Bedrock helps you build AI applications that support data security and compliance standards, including GDPR and HIPAA.

Now that that's been answered, let's move on to the next use case, which is knowledge curation.

So now that our marketing content writer at AnyCompany has created the product summaries and potentially stored them in a database, the sales manager at AnyCompany wants to take those product summaries, add additional information like pricing and promotional data to create a new, modified summary, and then create a demand generation email from it that he can use to boost sales for his company, right?

So that's the next use case we are looking at. Now, how would you go about doing it?

So the first task would be asking the generative AI application to add the pricing and promotional information for the smartwatch product to the product summary. And as you can see, all this information lives in different sources and databases in the company.

So on making this request, the knowledge curation application picks all the right information from the different sources and comes up with a creative new product summary, which includes the price and the promotional information in orange.

Good start. Next, he asks for the demand generation email to be created using that modified summary, so that he can use it in his email template for his campaign.

So let's see how the generative AI application does. As you can see, it took the summary and created an attractive email which will catch the attention of customers' eyes and help them drive their sales campaign for the smartwatch.

So how do you go about taking all these different sources of data and putting them together easily, quickly and creatively so that it boosts employee productivity as well as improves your overall sales?

One of the key components to implement this solution is Amazon Q, which you may have heard about in Adam's keynote yesterday, where it was announced. It's a new type of generative AI-powered assistant that is specifically for work and can be tailored to your business to have conversations, solve problems, generate content and take actions using the data and expertise found in your information repositories, code bases and enterprise systems.

So seems in line with our use case, right? So let's see how we can implement this using Amazon Q and create a full solution.

So for the sales manager, Amazon Q, your business expert, is a fully managed, end-to-end, no-code application that can connect to multiple sources of data with built-in connectors for over 40 services, including Salesforce, Google Drive, Microsoft 365 and ServiceNow, among many others.

So all you have to do is plug in those data sources to Amazon Q using the built in connectors and it will give you actionable insights directly by asking questions to the service after the sources are plugged in.

So the service itself indexes all the data and computes the semantic similarities; all you have to do is start asking questions.

Once you connect the different sources, it also provides a web-based chat interface for you to start assigning tasks or asking questions, all set up in a secure manner.

Amazon Q generates answers and insights according to the material and knowledge that you provide through these sources, backed up by references and citations to source documents.

With Amazon Q, the sales manager is finally able to get that creative email summary out through his demand gen email and solve the business use case all without writing a single line of code.

Now, we are really getting any company to start humming with generative AI, right? We have helped them create marketing campaigns. Now we are helping them with sales campaigns. Let's see what else we can help them with using generative AI.

So now that we've looked at creativity, let's move on to improving customer experience.

This is one of the key areas for most customers I talk to: they have call centers, and they want to provide customer support of the highest quality to their customers, but in a scalable way, without having to spend human effort.

And once they're able to provide that level of customer support, they want to analyze how it is going. How are the agents talking to their customers? Are they able to reduce the talk time while answering all the questions and improving the quality of answers? Those are the things companies want to understand.

So let's see how we can use generative AI to help improve customer experience.

Let's start with conversational Q&A. Again, our favorite e-commerce company, AnyCompany, has gained some smartwatch customers through our amazing marketing and sales campaigns using generative AI. And now those customers want to call in or speak to a chat interface and get common questions answered about the smartwatch, or get some help troubleshooting, right?

So when they call in, they want to be given those answers based on some knowledge guides that AnyCompany has. So they should be accurate, quick and to the point, but you should not have to use humans to answer those common questions, right?

So how do we go about that?

Suppose the smartwatch customer says "My smart watch is not connecting to my phone." Let's see what the virtual assistant comes up with.

It talks about checking the Bluetooth connections and things like that. Seems like a plausible troubleshooting step, right?

But if you take a second look: how does the virtual assistant know the make and model of the watch? Isn't that the first thing it should establish before providing the steps?

So this is what is known as hallucination. The model used its own training knowledge of how to connect a smartwatch and started providing generic responses. However, what we want is to rely not on the model's own knowledge but on AnyCompany's troubleshooting guide, providing it as context alongside the question so that the application can give relevant answers, right?

So that's hallucination, and that's how we can solve it: by providing the relevant context.

Now, you can see by using the troubleshooting guide, the virtual assistant actually starts right at the top by suggesting, "Can you please tell me what is the make and model of your smartwatch?" And then it would be able to lead them down the right path of finding what's the issue with the phone and connecting to the smartwatch.

So let's see how we can implement this with AWS with low effort.

Now, one of the most common techniques to implement what we just talked about is known as retrieval augmented generation or RAG.

How many of you have heard of RAG, or are using RAG in your generative AI applications? It looks like quite a few hands up; we've got an advanced set of generative AI experts here.

The reason I want to talk about it anyway is to show that there is some amount of effort involved in it, and then show you an easier way to do it with AWS.

So for those who are not familiar with RAG at a high level, taking our smartwatch customer, when the smartwatch customer asks "My smartwatch is not connecting to my phone", that question is taken by an orchestrator and passed to an intermediate repository to search for relevant answers to that question.

And then the relevant context along with the question is passed to a foundation model to get the response based on that context. That's RAG at a high level.

But let's dig deeper into the mechanics of RAG. Generally for RAG, the intermediate repository is a vector database. Why a vector database? Because machine learning algorithms understand numbers better than text: vectors are better suited for semantic search.

So you want to take your text, convert it into embeddings, or vectors, and store them in a vector database. This is an offline process where you take your textual data and pass it through what we call an embeddings model, which converts the text into vectors and stores them in the vector database.

Ideally, you want this automated and repeatable because you want to add, keep adding documents as you go along.

Next, once you have your vector database: when you ask a question, the orchestrator first passes the question to the embeddings model to convert it into a vector. That vector is then used to semantically search for relevant answers against the vector database in step three.

And then finally, the question and the context of the most relevant answer found is passed to the foundation model to get the relevant answer.

So it is quite a bit of work and effort and the orchestrator is bringing it all together.
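Putting those four steps together, here is a minimal, self-contained sketch of the RAG flow. The bag-of-words "embedding" below is a stand-in for a real embeddings model (such as Amazon Titan Embeddings), the documents are made up, and the final prompt is what you would send to the foundation model.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words vector. A real system would call
    an embeddings model instead of counting words."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Step 1 (offline): embed the documents and store the vectors
docs = [
    "To pair the smartwatch, enable Bluetooth on your phone first.",
    "The smartwatch battery lasts up to two days per charge.",
]
index = [(doc, embed(doc)) for doc in docs]

# Steps 2-3 (online): embed the question, then search for the best match
question = "My smartwatch is not connecting to my phone"
q_vec = embed(question)
context, _ = max(index, key=lambda item: cosine(q_vec, item[1]))

# Step 4: pass question plus retrieved context to the foundation model
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(context)
```

The orchestrator's job in a production system is exactly this glue: embed, search, and assemble the prompt, just at scale and against a real vector database.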

Now, many customers and people who raised their hands here are using RAG or have been using RAG and it's a fairly easy process to set up. But if you want to do it at scale and in production, you want a little bit more help and an easier way to do it.

So let's see how we can simplify this. Let's start with the standard RAG solution. Let's say you want to implement this virtual assistant based on the conversational Q&A solution using all AWS services.

So let's replace the vector database with the Vector Engine for Amazon OpenSearch Serverless, which can be your serverless vector database.

Then we can use the Titan Embeddings model in Amazon Bedrock to create embeddings, just using APIs, without having to manage the embeddings model. That's where you will feed in your data.

And then finally, you can use one of the foundation models in Amazon Bedrock to generate the conversational answer.

Now, many customers I have worked with use LangChain, an open-source framework, to orchestrate the connections between all these different services. And all of the services you see here are supported by LangChain.

However, LangChain is an open-source framework, and not every company is comfortable using open-source solutions in production-grade systems at scale; they're hard to maintain, right?

So how can we simplify it further with AWS services?

And if you listen to Adam's keynote yesterday, Knowledge Bases for Amazon Bedrock was announced.

Knowledge Bases for Amazon Bedrock is a single, end-to-end managed RAG solution, right?

With Knowledge Bases for Bedrock, the service will create the vector engine for you, connected to the embeddings model (in this case, the Titan Embeddings model) and to the source of data (in this case, S3).

And then we introduced a new API, the Retrieve and Generate API.

So without having an orchestrator, the Retrieve and Generate API in Knowledge Bases for Amazon Bedrock will not just retrieve the relevant context from the knowledge base that you created, it will also make an API call to the chosen foundation model in Amazon Bedrock and get the final answer for you all with a single API call.

So, Retrieve and Generate: you don't need an orchestrator. All you need to do is create the knowledge base with your choice of engine.

Today, you can use Amazon OpenSearch Serverless, Pinecone or Redis Enterprise Edition for your vector engine, the Titan Embeddings model, and S3 as your source of data. With a few clicks, you can create the knowledge base, and then you can use the Retrieve and Generate API to start getting answers from your RAG-based solution right away: fully managed, scalable, secure, ready to go, with no code written.
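In code, that single call might look like the sketch below, using the AWS SDK for Python. The knowledge base ID and model ARN are placeholders, not real resources, and you should check the Bedrock documentation for the authoritative request shape; boto3 is imported lazily so the request builder can be used on its own.

```python
def build_rag_request(question, kb_id, model_arn):
    """Build a RetrieveAndGenerate request payload. The kb_id and
    model_arn arguments are placeholders supplied by the caller."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }

def ask_knowledge_base(question, kb_id, model_arn):
    """Single call: retrieve relevant context AND generate the answer."""
    import boto3  # AWS SDK; imported lazily so the sketch loads without it
    client = boto3.client("bedrock-agent-runtime")
    resp = client.retrieve_and_generate(
        **build_rag_request(question, kb_id, model_arn)
    )
    # The response carries the generated text plus source citations
    return resp["output"]["text"], resp.get("citations", [])
```

Compare this with the standard RAG diagram earlier: the embed, search and generate steps all collapse into that one managed API call.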

In addition, Knowledge Bases for Amazon Bedrock also gives you citations for the sources the data is coming from, as well as short-term memory for longer conversations you may have with the RAG chatbot.

So isn't that an easier way to set up your RAG solution, with a few clicks and a completely managed service?

Now, let's talk about the next use case with Contact Center Intelligence.

Now, normally in a contact center solution, you'll have lots of recordings, and those recordings are transcribed into text using a service like Amazon Transcribe. Contact center managers generally have to go through these transcripts to understand where quality has to be improved, and things like that.

Now, there is an existing Post Call Analytics solution from AWS where you can use Amazon Transcribe to create your transcripts, and it can give you additional insights such as sentiment analysis using Amazon Comprehend, or analytics like how much is talk time and how much is quiet time.

So that solution is already available today. But how can generative AI further enhance it: improving agent quality, giving faster coaching tips to your agents, reducing talk time and helping them maintain compliance?

So let's take a call center manager persona. On top of the transcripts, the call center manager may want to ask the post-call analytics solution, "Was the agent able to resolve the call?"

And the foundation model powering this service will look through the transcript and based on whether the agent was able to resolve the call or not, give you a quick summary of what the agent did and was the call resolved or not.

So this saves you a lot of time and effort to understand how the agent did.

Now further, the call center manager may also ask, "What could the agent have done better?"

And because many of these foundation models are trained on vast corpora of data, including lots of call center transcripts, they are able to look at a specific transcript and provide tips on how the agent could have done better.
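One low-effort way to ground those answers in the actual call, rather than the model's general knowledge, is simply to put the transcript into the prompt. Here is a sketch; the analyst wording and the sample transcript are made up for illustration.

```python
def build_analytics_prompt(transcript, question):
    """Compose a grounded prompt so the model answers only from the
    call transcript rather than from its general training knowledge."""
    return (
        "You are a contact-center quality analyst. Using ONLY the call "
        "transcript below, answer the manager's question.\n\n"
        f"Transcript:\n{transcript}\n\n"
        f"Question: {question}\nAnswer:"
    )

transcript = (
    "Agent: Thanks for calling AnyCompany support.\n"
    "Customer: My smartwatch will not sync.\n"
    "Agent: Restarting Bluetooth fixed it. Anything else?\n"
    "Customer: No, that solved it, thank you."
)
prompt = build_analytics_prompt(
    transcript, "Was the agent able to resolve the call?"
)
```

Running this same prompt template across thousands of transcripts, with questions like "What could the agent have done better?", is what turns raw recordings into coaching tips at scale.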

Now, if you could do this across tens of thousands of transcripts quickly, you could provide quick coaching tips to agents and help them improve their overall quality as well as compliance to industry regulations.

Right. So this is an easy way to enhance the productivity of your call center manager as well as improve the quality of your overall contact center service.

So how can we implement this with generative AI on AWS?

Like I mentioned, there is already a one-click Post Call Analytics solution from AWS, which includes services like Amazon Transcribe and Amazon Comprehend and gives you a basic post-call analysis experience.

In addition, with generative AI, you can add Amazon Bedrock as an enhancement to this solution, and it can provide call summaries, insights and Q&A to the call center manager in seconds.

Thus you improve the overall quality of your agents. Since this involves multiple services and pieces, you can deploy it all with one click using a CloudFormation template. The solution is created and maintained by AWS and published as open source, so you can look at all the code and deploy it with one click.

If you want a fully supported solution, you can work with one of our enterprise partners for a fully supported, enterprise-grade version of the same.

So again, very easy to get started and low effort. If you want more information about the solution, please scan the QR code at the top right; it will take you to a deep dive into the solution and how to get started.

Finally, apart from the Post Call Analytics solution, we also have two other solutions: a pre-call engagement and Q&A bot, as well as a Live Call Analytics solution, both of which can again be deployed with one click using a CloudFormation template.

More information about all three solutions is available by scanning this QR code. You can get started by deploying them, looking at the code (they are open source), and reaching out to AWS if you want help deploying them through one of our certified partners.

So now with AnyCompany, we have taken them all the way from generating marketing summaries to creating their marketing campaign, then being able to sell their smartwatches using a sales campaign and then improving the customer experience using a chat bot which can answer common questions as well as helping their call center agents to answer more complex questions in a better way using our Post Call Analytics solution.

Hopefully, AnyCompany is in a better place today using all these generative AI solutions. And you have gotten a better idea of how to use generative AI to solve common use cases.

Now, I'll hand it back to Akshara to take us through the next use case of process automation.

Now, in this case, the foundation model first decides: I should first get the current stock value of AnyCompany. Once I get that value, I'll compare it using the comparison tool, and based on that comparison, I'll send an email to the user.

Now, in order to build this system, developers need to go through a series of resource-intensive steps. They need to define the instructions and the orchestration. They need to configure the foundation models. They need to write custom code to execute these steps and call the APIs needed. They'll also have to set up the security policies and manage the underlying infrastructure.

Does that all sound complicated? Using Agents for Amazon Bedrock, you can configure foundation models to automatically break down and orchestrate tasks in just a few clicks, all without writing any manual code. Agents for Amazon Bedrock can also take action to fulfill user requests by executing API calls. You can define the business logic for these API calls using AWS Lambda, our serverless, event-driven compute service. As a fully managed capability, agents also take care of the complex system integrations and infrastructure provisioning for you.
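As an illustration, the Lambda function backing such an agent action might look like the sketch below, using the stock-value example. The event and response field names follow our understanding of the Agents for Amazon Bedrock action format; treat them as assumptions and check the service documentation, and the stock lookup itself is a hard-coded placeholder for real business logic.

```python
import json

def get_stock_value(ticker):
    """Placeholder business logic; a real implementation would call a
    market-data API instead of a hard-coded table."""
    return {"AnyCompany": 42.50}.get(ticker, 0.0)

def lambda_handler(event, context):
    # The agent passes the resolved parameters; we read them by name.
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}
    price = get_stock_value(params.get("ticker", ""))
    # Echo back the routing fields so the agent can match the response
    # to the action it invoked (assumed schema; verify against docs).
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "apiPath": event.get("apiPath"),
            "httpMethod": event.get("httpMethod"),
            "httpStatusCode": 200,
            "responseBody": {
                "application/json": {"body": json.dumps({"price": price})}
            },
        },
    }
```

The agent handles the orchestration (decide, call, compare, email); your Lambda only implements the individual actions.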

Next, consider yourself a mortgage application reviewer; we are now at the document processing use case. You are working at a large bank, and in order to process the different applications, one of the simple tasks you have to do is get the name and the address of the applicant from the payslips. To build this system today, you would write ML models to sort these documents into specific, predefined categories.

Next, you would need to extract data from the relevant documents (payslips in this case). And finally, you would write some post-processing logic to structure the output in a format that can be consumed by downstream applications.

Now, this process can be simplified using generative AI. For example, you can use LLMs to classify pages within a document without training a model, which also simplifies adding more classes in the future. You can use LLMs to structure the content the way you want. You can also enhance this pipeline with summarization of these documents and question answering on the basis of the documents provided.

So in order to build this system, you will need an extraction service, a vector database to hold and manage the data, and a service that gives you access to these foundation models. You would also need an orchestrator to define the workflows and process these documents in parallel, at scale.

So in order to reduce the heavy lifting involved in creating these IDP workflows, we have pre-stitched these services together in the form of quick, CDK-deployable workflows and sample notebooks for you. The QR codes on the screen will take you to those solutions. With that, I'd like to invite Shri to the stage to walk us through productivity tools. Thank you.

One of the key benefits of generative AI is enhanced productivity. Now, let's look at some AWS services with generative AI capabilities that enhance employee productivity within an organization. We will look at use cases around SQL generation, code generation, and report generation.

Consider a software development manager who is responsible for delivering a large pipeline of projects. He has access to a few developers who utilize various integrated development environments (IDEs), such as Visual Studio, Visual Studio Code, AWS Cloud9, et cetera. As we all know, developers are very critical for any organization. Not only is there a worldwide shortage of developers, but developers also do not get enough focus time to work on important business problems.

In addition to spending a significant portion of their time on manual and undifferentiated tasks, such as manipulating strings, uploading files, and unit testing, they also have to keep up with the latest and greatest across various programming languages. Our IT manager here is struggling with similar challenges and is wondering if AI can help him deliver this pipeline of projects in half the time.

And in order to keep up with the time pressure, developers often help each other out by sharing code on publicly available forums. But that can let security vulnerabilities creep in, or even violate open source licensing terms.

What if the developers here had access to an AI code generator that could help alleviate all of these challenges? Let's have a look at how Amazon CodeWhisperer can help this developer and IT manager.

What is Amazon CodeWhisperer? Amazon CodeWhisperer is a general-purpose, machine learning-powered AI code generator that provides code recommendations in real time. For example, as you notice here, as soon as a developer types in a prompt such as "parse a CSV string of songs and generate the list," CodeWhisperer can immediately generate multiple lines of code.
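For a sense of what such a completion looks like, here is one plausible implementation of the "parse a CSV string of songs" prompt, written by hand using Python's standard csv module. CodeWhisperer's actual suggestion may differ:

```python
import csv
import io

def parse_songs(csv_string):
    """Parse a CSV string of songs and return a list of row dictionaries."""
    # DictReader uses the first CSV row as the field names.
    reader = csv.DictReader(io.StringIO(csv_string))
    return list(reader)
```

A call like `parse_songs("title,artist\nKashmir,Led Zeppelin\n")` returns one dictionary per song, keyed by the header row.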

CodeWhisperer can scan developer-written or machine-generated code for security vulnerabilities. It can even tell you how to remediate the vulnerabilities it identifies.

Lastly, CodeWhisperer can help you code responsibly by checking that the generated code is original, and by identifying whether it references any open source code, so that you can decide whether to use that code or provide the appropriate licensing compliance.

So these are some key features of Amazon CodeWhisperer, and there are many more. For example, you can utilize CodeWhisperer to generate synthetic data when you need it to test out your use cases. You can even ask CodeWhisperer to generate code according to your organization's guidelines. And if you want to generate database objects such as tables, CodeWhisperer can help you with that too.

Now, you must be wondering: OK, we have looked at CodeWhisperer, but what about analytics? Do we have a pattern like this for analytics use cases? Let's have a look.

So in this case, we have an SQL analyst who is new to the organization, and they have been asked to identify the top five users in Seattle who bought the most tickets in 2022. To answer such a question, they have to write a complex query, and they need to understand the underlying table semantics, joins, names, et cetera.

However, since they are new to the organization, they do not understand the underlying table semantics. How can we help this SQL analyst? Let's see how the Amazon Q generative SQL integration in Amazon Redshift Query Editor can help.

With this integration, SQL users of all skill levels can be productive by entering a prompt in plain English. For example, as we notice here, they can enter the prompt "find the top five users in Seattle who bought the most number of tickets in 2022," and the generative SQL capability inside the query editor generates the accurate SQL code for them.
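To make the example concrete, here is one plausible SQL statement a generator could produce for that question, run against a tiny in-memory SQLite stand-in for Redshift's TICKIT-style users and sales tables. The schema, data, and query wording are invented for illustration:

```python
import sqlite3

# Tiny in-memory stand-in for a TICKIT-style schema (users, sales).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (userid INTEGER, username TEXT, city TEXT);
CREATE TABLE sales (saleid INTEGER, buyerid INTEGER, qtysold INTEGER, saletime TEXT);
INSERT INTO users VALUES (1,'ana','Seattle'),(2,'bo','Seattle'),(3,'cy','Portland');
INSERT INTO sales VALUES (10,1,5,'2022-03-01'),(11,1,2,'2022-04-02'),
                         (12,2,3,'2022-05-09'),(13,3,9,'2022-06-11');
""")

# One plausible statement the generator could produce for the analyst's question.
GENERATED_SQL = """
SELECT u.username, SUM(s.qtysold) AS total_tickets
FROM sales s
JOIN users u ON u.userid = s.buyerid
WHERE u.city = 'Seattle' AND s.saletime LIKE '2022%'
GROUP BY u.username
ORDER BY total_tickets DESC
LIMIT 5;
"""

rows = conn.execute(GENERATED_SQL).fetchall()
```

On Redshift itself the date filter would more likely use `EXTRACT(year FROM saletime)`, but the join, aggregation, and ranking structure is the part the analyst would otherwise have to work out by hand.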

This feature utilizes user intent, query patterns, and the underlying table schema metadata to generate accurate SQL recommendations. It can even provide guardrails, which is very important. For example, if you issue a destructive query such as "DELETE FROM sales," it immediately gives you a warning that the statement might modify your underlying tables.
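The kind of warning described here can be approximated with a toy check. This is not Amazon Q's actual logic, just a small sketch of the idea of flagging destructive statements before they run:

```python
import re
from typing import Optional

# Toy approximation of a query-editor guardrail for destructive SQL.
DESTRUCTIVE = re.compile(r"^\s*(DELETE|DROP|TRUNCATE|UPDATE)\b", re.IGNORECASE)
MISSING_WHERE = re.compile(r"^\s*(DELETE|UPDATE)\b(?!.*\bWHERE\b)",
                           re.IGNORECASE | re.DOTALL)

def warn_on_destructive_sql(sql: str) -> Optional[str]:
    """Return a warning message for risky statements, or None if it looks safe."""
    if MISSING_WHERE.match(sql):
        return "Warning: this statement modifies every row (no WHERE clause)."
    if DESTRUCTIVE.match(sql):
        return "Warning: this statement is destructive; review it before running."
    return None
```

A real guardrail would be far more sophisticated, but the pattern is the same: inspect the statement, warn the user, and let them decide.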

It gives you a warning so that you can decide whether or not to execute the statement in the query editor. Now that we have looked at the code generation and SQL generation use cases, you must be wondering: can we do something similar with report generation? Let's have a look.

So in this case, we have a sales manager who is trying to analyze sales trends. In the current scenario, they rely on BI developers who extract data from different data sources and then create BI dashboards for them.

However, what if they had an experience where they could enter a prompt such as "monthly sales and profit," and the business intelligence tool, in turn, called a large language model to generate the BI dashboard? The Amazon QuickSight integration with large language models makes this experience possible. Let's see it in action.

For example, what you are looking at here is a sales dashboard. As soon as they type in a prompt such as "monthly sales and profit," it generates the visualization on the fly. They can change the visualization type if they want to use something else, or they can even add additional analytics, such as a forecast, and insert it into their dashboard.

You can even utilize this feature for visual fine-tuning. For example, you can change the x-axis or the y-axis, or change the chart type, just by using visual prompts. Another common use case is calculations.

For example, what you notice over here is an analyst trying to find the month-over-month change in free-trial users. As soon as they type that, they get the calculation syntax they can utilize in their BI dashboards.
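The underlying month-over-month calculation itself is simple; here it is sketched in plain Python with made-up free-trial counts (QuickSight generates its own expression syntax for this):

```python
# Plain-Python sketch of a month-over-month percent change calculation.
# The free-trial counts below are made-up sample data.

def month_over_month(values):
    """Percent change of each period vs. the previous one; None for the first."""
    changes = [None]
    for prev, cur in zip(values, values[1:]):
        changes.append(round((cur - prev) / prev * 100, 1))
    return changes

free_trials = [200, 250, 225]          # e.g. Jan, Feb, Mar sign-ups
mom = month_over_month(free_trials)    # [None, 25.0, -10.0]
```

The point of the QuickSight feature is that the analyst never has to derive even this small formula; they describe the metric in plain English and get the expression back.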

So this covers pretty much all the use cases we wanted to cover. Prasad walked us through the creativity and customer experience section, then Akshara walked us through how you can solve process automation use cases, and just now we looked at the productivity tools.

You can also scan the QR code at the bottom right, which takes you to the AWS Solutions Library, where we have prebuilt solutions for some of the use cases we have covered. It also has best practices, in the form of guidance documents, to get you started on your generative AI journey.

Now I'll hand it over to Akshara, who will cover how you can adopt generative AI responsibly and conclude the session. Thank you.

We are now at the last, and one of the most important, sections of this session: how to innovate responsibly with generative AI. Generative AI has huge potential, but it is crucial to ensure that these AI systems operate within the boundaries of ethics and reliability. This is important not only for the reputation of the businesses that use them, but also to preserve the trust of end users and customers.

What constitutes responsible AI has been debated globally, but at AWS we define it as comprising six key dimensions: fairness, explainability, robustness, privacy and security, governance, and transparency. And while this is our definition today, we will continue to iterate on it as the science and engineering behind responsible AI matures.

At AWS, we are committed to developing and deploying generative AI responsibly. For example, Amazon's Titan foundation models are built to detect and remove harmful content. They can also reject inappropriate content in the user's input and filter inappropriate content out of the model's output.

To build generative AI applications that are safe and trustworthy, you can also use Amazon Comprehend. Amazon Comprehend is a natural language processing service that comes with APIs to detect and redact PII data, to detect toxic content, and to perform prompt safety classification, identifying prompts that are not safe altogether.
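As a sketch of how redaction on top of PII detection works: Comprehend's `detect_pii_entities` API returns entity spans with a type and character offsets, and a small helper can splice redactions into the text. The boto3 call is commented out so the sketch stays self-contained; the helper itself only needs Comprehend-shaped entity dictionaries:

```python
# Sketch of PII redaction using Comprehend-style entity offsets.
#
# import boto3
# comprehend = boto3.client("comprehend")
# resp = comprehend.detect_pii_entities(Text=text, LanguageCode="en")
# entities = resp["Entities"]

def redact(text, entities):
    """Replace each detected PII span with its entity type, e.g. [NAME]."""
    # Work right-to-left so earlier offsets stay valid while we splice.
    for e in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
        text = text[:e["BeginOffset"]] + f"[{e['Type']}]" + text[e["EndOffset"]:]
    return text
```

For a string like "Contact Jane Doe at jane@example.com", redacting the name and email spans yields "Contact [NAME] at [EMAIL]".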

We want to build generative AI that is safe, trustworthy, and helpful. With that, I'd like to close this session with a few final tips.

Cloud is a great enabler for generative AI. You want your teams working on problem solving and innovation rather than on the underlying complexity and cost of infrastructure.

Next, invest in the right data foundations. With generative AI, the quality of the data matters more than the quantity.

Third, think beyond the technology. Start by considering your stance on the ethics, transparency, data attribution, security, and privacy of your AI systems. Consider the technical skills required and devise a plan to build them in your organization.

And lastly, start by defining your use cases, and the narrower the better, because that way you can define what fairness means for each use case and develop algorithms to enforce that definition.

With that, thank you all for attending this session. Please take a moment to complete the survey on your mobile app, and we'll take all questions off the stage now. Thank you again.
