Build your first generative AI application with Amazon Bedrock

All right, good afternoon, everybody. Today is the first day of 2023 AWS re:Invent and welcome to this breakout session, "Build Your First Generative AI Application with Amazon Bedrock".

My name is She Ding, and I'm a senior AI/ML specialist solutions architect at AWS. With me today is my co-speaker Kasab Kki. Kasab is a principal product manager on the Amazon Bedrock team at AWS.

In this session, we hope to give you a good understanding of generative AI fundamentals as well as a comprehensive overview of Amazon Bedrock. So we're gonna get started with a brief introduction to generative AI, followed by a feature walkthrough of Amazon Bedrock together with a demo. So we hope to give you some good understanding about how Amazon Bedrock addresses some of the challenges in building generative AI applications.

Next, we'll talk about some use cases and customer stories to give you insights into how you can use Amazon Bedrock to build a generative AI application. And lastly, we're gonna wrap up this session with some guidance to show you how you can get started on your generative AI journey on AWS.

Now, as you can see, there's a ton of exciting content in this session. So let's get started.

As all of you are probably aware, there are a lot of things going on in the field of technology. Recently, transformative and innovative technologies have enabled us to solve really complex problems and reimagine how we do things. One of those transformative technologies that is gaining a lot of traction these days is generative AI.

While a lot of attention has been given to how consumers are using generative AI, we believe there's an even bigger opportunity in how businesses will use it to deliver amazing experiences to their customers and employees. The true power of generative AI goes beyond a search engine or a chatbot. It will transform every aspect of how your company and organization operate.

The field of artificial intelligence has been around for decades. Now you may have this question: with so much potential, why is this technology, generative AI, which has been percolating for decades, reaching its tipping point now? Because with the massive proliferation of data, the availability of highly scalable compute capacity, and the advancement of machine learning technology over time, generative AI is finally taking shape.

So what exactly is generative AI? Generative AI is an AI technology that can produce new, original content which is close enough to human-generated content for real-world use cases such as text summarization, question answering, and image generation. The models powering generative AI are known as foundation models. They are trained on massive amounts of data with billions of parameters.

Developers such as yourself can adapt these models for a wide range of use cases with very little fine-tuning needed. Using these models can reduce the time for developing new AI applications to a level that was hardly possible before.

Now, you may be wondering how generative AI fits within the world of artificial intelligence. So for anyone who's embarking on their generative AI journey for the first time, I know this might be confusing. So let's start with the broadest area.

First, the broadest area is artificial intelligence. Artificial intelligence is any technique that allows computers to mimic human intelligence. This can be through logic, if-then statements, or machine learning.

Within the scope of artificial intelligence, machine learning is a subset where machines search for patterns in data to build logic models automatically. Machine learning models have evolved from shallow models to deep, multi-layered neural networks, and these deep, multi-layered neural networks are called deep learning models.

So deep learning models perform more complicated tasks like speech and image recognition. Generative AI is a subset of deep learning. It is powered by foundation models that are extremely large and can perform the most complex tasks.

Now let's dive a little deeper to understand the complexity of generative AI. Traditional forms of machine learning models allow us to take simple inputs like numerical values and map them to simple outputs like predicted values. With the advent of deep learning, we could take complicated inputs like images or videos and map them to relatively simple outputs like identifying an object in an image.

And now with generative AI, we can leverage massive amounts of complex data to capture and represent knowledge in a more advanced way, mapping complicated inputs to complicated outputs, like extracting a summary and key insights from a large document.

So this is how traditional machine learning models, deep learning models, and generative AI models, all under the scope of artificial intelligence, differ from each other from an input and output point of view.

Now let's take a look at the differences from a model point of view. Most traditional machine learning models are trained by a supervised learning process. These models use architectures that require data labeling and model training to produce one model for one specific task.

However, the foundation models that power generative AI applications are driven by a transformer-based neural network architecture. This architecture enables models to be pretrained on massive amounts of unlabeled data by a self-supervised learning process.

So these models can be used for a wide variety of generalized tasks, and they can be easily adapted for a specific domain or industry by a customization process known as fine-tuning, using a very small amount of labeled data.

So this difference between the traditional machine learning models and the foundation models is what can save you tons of time and effort.
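To make that adaptation step concrete, here is a minimal sketch, not from the talk, of what fine-tuning a foundation model with a small labeled dataset might look like using the Bedrock model customization API via boto3. The job name, S3 URIs, role ARN, base model ID, and hyperparameter values are all placeholder assumptions; check the Bedrock documentation for what is supported in your account and region.

```python
# A minimal sketch, assuming Bedrock model customization is available in your
# account. All names, ARNs, and S3 URIs below are placeholders for illustration.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.create_model_customization_job(
    jobName="my-first-fine-tuning-job",                         # placeholder
    customModelName="my-domain-adapted-model",                  # placeholder
    roleArn="arn:aws:iam::111122223333:role/BedrockFineTune",   # placeholder IAM role
    baseModelIdentifier="amazon.titan-text-express-v1",         # assumed base model
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},  # small labeled dataset
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
    hyperParameters={"epochCount": "1", "batchSize": "1"},       # assumed values
)
```

The point of the sketch is the workflow, not the specific values: you start from a pretrained base model and supply only a small labeled dataset, rather than training a task-specific model from scratch.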

There are mainly three types of foundation models. The first type is text-to-text models. These models accept text as input and output text as well. For example, you can input text from a newspaper article to these models and get a text summary as a response.

The second category of foundation models is text-to-embedding models. These models accept text as input and output numerical representations of the input text. These numerical representations are called embeddings. We're gonna cover embeddings in a little more detail later in this session.

And the third category of foundation models is multimodal foundation models. So these models take text as input but generate output in another data modality such as images.
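To illustrate how these three categories differ in practice, here is a minimal sketch, not shown in the talk, of invoking one model of each type through the Amazon Bedrock runtime API with boto3. The specific model IDs and request/response body shapes are assumptions for illustration; the models available to you and their exact payload formats depend on your account and the Bedrock documentation.

```python
# A minimal sketch of the three foundation model categories via Bedrock.
# Model IDs and body formats are assumed for illustration only.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# 1. Text-to-text: send a prompt, get generated text back.
text_response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",            # assumed text model ID
    body=json.dumps({"inputText": "Summarize this article: ..."}),
)
print(json.loads(text_response["body"].read()))

# 2. Text-to-embedding: send text, get a numerical vector (embedding) back.
embed_response = bedrock.invoke_model(
    modelId="amazon.titan-embed-text-v1",              # assumed embedding model ID
    body=json.dumps({"inputText": "Amazon Bedrock"}),
)
embedding = json.loads(embed_response["body"].read())["embedding"]

# 3. Multimodal (text-to-image): send a text prompt, get an image back.
image_response = bedrock.invoke_model(
    modelId="stability.stable-diffusion-xl-v1",        # assumed image model ID
    body=json.dumps({"text_prompts": [{"text": "a mountain at sunrise"}]}),
)
```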

Now, if you've been able to follow along so far, you basically officially understand the basics of generative AI.

Now let me walk you through what Amazon's generative AI journey has been so far.

Machine learni
