Unlocking the industry potential of generative AI

"Hi there. I'm about to go on stage at re:Invent 2023 and need some help.

Hi, Shaun. How can I help?

Re:Invent is huge and I want to make my session stand out. Can you help me come up with a video opening?

Of course, how long should the video be?

Let's say around a minute, just enough time to grab the audience's attention. They're probably tired after a full day of learning.

I am always ready to help with videos. Videos always need a good music pairing. What kind of music would you like?

I'd like to set the right mood for my talk. Let's choose something high energy with a good beat.

Great idea. The right level of energy is important for the video. Popular videos like this typically include catchy music, engaging visuals and relevance to the audience. A montage could be a good option.

Yes, a montage. I love that. How about putting a couple of your jokes in there? I think the audience would like your own comedic material.

Shaun, tell me more about your talk so I can create the right production script for you.

My talk is all about inspiring people to unlock the potential of generative AI for their industry. I have content I've created in the past full of examples of how far generative AI has come in the last year and how it can apply to all of their businesses.

Got it. I've put together the script and sent it to you. You will have it momentarily.

That's great. One more question - do you want to help present my talk?

You are the expert. You've got this. Good luck.

Alright, fair enough.

OK, Alexa, let's count down and start the show with that video. Starting show in 5, 4, 3, 2, 1...

Generative AI is here and what seemed impossible just a year ago suddenly isn't. The question is how. How can you reinvent your industry? How can you reimagine how you work? How can you redefine customer experiences?

Generative AI is unlocking powerful new capabilities that will change your business - surfacing targeted financial advice, speeding drug discovery, and personalizing shopping. It's reducing time to market for manufacturers, accelerating production workflows in media and entertainment, and transforming how vehicles are designed.

It's boosting innovation, enabling new business models, and creating new competitive advantages. We are at an inflection point where how we work and innovate for our customers is being reinvented with generative AI. And it's only the beginning.

Generative AI is poised to transform every aspect of how you and your industry operate. So the question isn't how, it's how soon?

Please welcome the Director of Technology Industries, Shaun Nandi.

Hi everyone. Generative AI's time is now. Thank you all for joining me at my 9th re:Invent. Today you're going to hear from three incredible customers - customers who are using generative AI to reinvent experiences..."

You are not stuck with the model that you started with. And as you heard from Swami this morning, exclusive to Bedrock, we have the Amazon Titan family of models, including high-performing image, multimodal, and text model choices.

Now, at the top of the stack, we are building applications that help our customers take advantage of generative AI. Amazon CodeWhisperer streamlines coding, and AWS HealthScribe improves clinical efficiency in healthcare by automatically creating notes from doctor-patient conversations. That's just a start.

Yesterday, we announced Amazon Q. Amazon Q is a game changer. It's an assistant that's designed for the enterprise. From day one, it will be tailored to your business using the corporate data that you allow it to connect to. Let's watch Matt Wood chat with Q. You can see him not just asking a question but appending his own data and having Q return insights in real time.

Q can be connected to your business data, information, and systems so it can have tailored conversations, solve problems, generate content, and take actions that are relevant to your business. Q understands Matt's role and what systems he has access to. You can see Q actually take action and create a Jira ticket for Matt to review.

We've designed Amazon Q to meet stringent enterprise security and privacy requirements from the beginning. Now, how do you balance risk? Let's talk about governance. All foundation models carry the risk of hallucination. You need to think through your guardrails and how you keep experts in the loop. Make sure you pick the right use cases, ones that are low risk, and make sure the right data is training your models.

One example: if you're using generative AI to support your finance teams, you want it to bring them back ideas and insights and identify problems, but probably not write the final ledger statements that are getting published for GAAP accounting. When you think about toxicity, you want to make sure that your employees and customers are not seeing content that's harmful. And of course, you have to consider intellectual property rights holders and protect your corporate data.

Your first action should be to implement a responsible AI program. You want to follow the principles of transparency, fairness, accountability, and privacy. There are many attributes of a responsible AI program to consider; they all require human oversight, with AI augmenting human judgment, not replacing it. And you want to bring a cross-functional team together to do that.

Now, we were super excited to hear about the launch of Guardrails on Bedrock. Guardrails on Bedrock can be applied to any foundation model supported by Bedrock, including fine-tuned models, as well as agents, whether you're configuring harmful content filtering, response policies, or denied topics defined with short natural language descriptions. Once you've configured it once, it will apply to every model you use through Bedrock for that use case. And coming soon, you will be able to redact sensitive PII in your FM responses. It's awesome.
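A guardrail like the one described can be assembled with the boto3 SDK. This is a minimal sketch, assuming the `bedrock` client's `create_guardrail` operation; the guardrail name, denied topic, filter strengths, and PII choices below are hypothetical examples, not from the talk, and the API call itself is shown but not executed.

```python
# Hypothetical Guardrails on Bedrock configuration covering the three
# areas mentioned above: denied topics written as short natural-language
# definitions, harmful-content filters, and PII redaction in responses.

def build_guardrail_request(name: str) -> dict:
    """Assemble a create_guardrail request payload (shape assumed from
    the public boto3 bedrock API; all values are illustrative)."""
    return {
        "name": name,
        "blockedInputMessaging": "Sorry, I can't help with that topic.",
        "blockedOutputsMessaging": "Sorry, I can't share that.",
        # Denied topic: a short natural-language definition is enough.
        "topicPolicyConfig": {
            "topicsConfig": [
                {
                    "name": "investment-advice",
                    "definition": "Recommendations about specific securities or portfolios.",
                    "type": "DENY",
                }
            ]
        },
        # Harmful-content filters applied to both prompts and responses.
        "contentPolicyConfig": {
            "filtersConfig": [
                {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
                {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            ]
        },
        # Redact sensitive PII in model responses.
        "sensitiveInformationPolicyConfig": {
            "piiEntitiesConfig": [
                {"type": "EMAIL", "action": "ANONYMIZE"},
                {"type": "PHONE", "action": "ANONYMIZE"},
            ]
        },
    }

request = build_guardrail_request("finance-assistant-guardrail")
# With AWS credentials configured, this single guardrail would then apply
# to any model invoked through Bedrock for the use case:
#   import boto3
#   boto3.client("bedrock").create_guardrail(**request)
```

Because the guardrail sits in front of the model rather than inside it, the same configuration carries over when you swap foundation models for the same use case.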

OK. I'm not going to hold you up any longer; I'm going to bring our first customer on stage. They're going to talk to you about finding the right balance between ideation, execution, and governance. We're going to hear from Mark, the CIO of Gilead Sciences, who's going to talk about how he and his company are applying generative AI across the pharma value chain.

[Mark's transcript formatted for readability]

Thank you, Mark, for coming out and sharing that story with us. I was especially struck by how the data foundation Gilead has built over the last several years has had such an impact on accelerating their journey. And I think that's the first time I've heard LLMOps discussed; it's definitely a new and emerging area.

Next, we're going to hear from Carrier, one of the largest manufacturing companies in the world. They're going to talk about how they're deploying generative AI across their business. Please welcome my friend Seth Walker, Head of Artificial Intelligence at Carrier.

[Seth's transcript formatted for readability]

So what we've done is we've created four pillars of an AI strategy at Carrier, and it essentially encompasses two things. What does every company encompass? It's going to be either people, or it's going to be tools and processes.

On the people side, you have to enable communication and competency. That means you need to be communicating with your employees and your customers about the ways in which we can leverage this technology. And you have to upskill your associates, and I don't mean just your technical associates. Of course we want to make sure that our data scientists, our engineers, and our analysts are all upskilling themselves in such a way that they can actually leverage and drive this technology.

But as we know, it doesn't matter how good your models are, it doesn't matter how good your tech is if you don't have business partners that understand how you can leverage it. And so it is key that as part of your communication and competency strategy that you are working with your business to make sure that they understand how to connect your AI strategy and the technology to specific use cases in ways that it can actually drive value for your company.

This is achieved through having a strong delivery team and a talent pipeline. By having a centralized delivery team that can focus on the innovative use cases and on creating that innovative technology, you pave the way for the other teams throughout the organization that might not have that technical acumen, so they can piggyback off of what you've created in order to drive innovation within their own areas.

And key to all of this, and it's all interlocking, are the tools and the processes. You always have to have that centralized platform. Anyone here that has experience with more traditional AI, and now even more importantly with generative AI, knows that there are a lot of risks, and those risks aren't just limited to generative AI.

If someone doesn't know what they're doing, it is very easy to build a model that looks fantastic in training but fails miserably in production because of issues with the data or with how you structured the problem. By creating a centralized platform that captures the learnings from your centralized experts, who can structure it and create the guardrails and framework around it, you can then push that out to your other associates and the rest of the people throughout the company.

You're now driving that innovation because you've created that pathway for them, and you're generating the guardrails that ensure their success. And this is all-encompassing: it's not just the platform for AI but for data too, as you've heard, and of course surrounding all of it is governance.

Governance is of course getting even more important now as we start thinking about generative AI and the particular risks that come with it. But by incorporating the governance framework into the centralized platform, and into your people and their upskilling, you can ensure that all of this works together: a centralized framework that can be decentralized across the organization to enable innovation and mitigate risk.

So we've heard a lot about all the technology and all the things that people are doing with generative AI and I don't want to miss out on the bandwagon. So what have we done when it comes to actually deciding use cases?

So we've created this, you know, kind of enterprise structure that helps us to enable our enterprise strategy, helps us to drive adoption across the organization. But what does that mean for specific use cases? And what does that mean across the long term?

We don't want to just think about what we're going to do tomorrow, although that's important; we want to think about where we want to be as a company in 5 to 10 years. You don't get to be a 100-plus-year-old company without thinking about the long term.

And with generative AI, it's simple, and it's very closely related to the way you might think about classical AI. There are foundational capabilities of generative AI, and starting from the bottom here, thinking of it as a foundation, we have the obvious: content generation and chat, the kinds of things everyone thinks about when they go to ChatGPT, type in a question, and get a response.

Then you can iterate on top of that: you have document analysis, you have research and ideation, development, and analytics, all the way to the gold, the real long term where we want to be, which is AI agents. And you can see how these things build on top of each other.

To do document analysis, you have to have chat. To push forward on research and analytics, you need the research and ideation piece. And to have AI agents, you have to have all of it.

And so every use case that we're tackling as a company at Carrier is not only geared towards generating specific business value in specific use cases, but it's also geared towards creating these foundational capabilities. So that as we think towards the long term, we've already created that expertise and we paved that pathway of innovation to allow us to get to that long-term objective, right?

And so let's get into an actual use case, one of many that we have deployed so far at our company. On the surface, this use case actually seems a little basic, and people will ask me, well, this isn't flashy, this doesn't look cool, I've seen this already.

So what we want to do is this: we have our supply chain, and everyone always hears about all the issues with supply chains. The supply chain is, of course, a very important aspect of our business for our customers. They want to make sure they're getting their products on time, when we say they will, and they want transparency and visibility into that.

And so we have tools that our customers can come to to understand where their shipment is and what's in it. But we wanted to integrate this capability into a chatbot.

Now again, this sounds unimpressive, right? We've done that before; we've built chatbots that can answer questions about data. But what's different with generative AI is that in the past, if you wanted to build a chatbot that could query data, you had to pre-map all of that out. You had to predefine all those branches: if the customer wants this, then tell the API to call this.

With generative AI, leveraging Amazon Bedrock agents, we just have to feed the API and its documentation to the agent, and it's able to, I'd say intuitively, understand exactly how to use that API to get the information it needs.

So you can see here, as the customer interacts with the chatbot, that we have not preprogrammed any of this. The AI is naturally able to make the correct API calls to pull the correct data. And you can see the implications of this: it reduces development overhead. What used to take many, many hours can now be done much more simply.

And you can also see how it creates those foundational capabilities. We have this AI data foundation that the agent needs to be able to query, and we want it to take independent action without a human having to preprogram it and tell it what to do. And this is incredible. This is the future.
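The pattern described here, handing the agent an API plus its documentation instead of pre-mapping dialogue branches, can be sketched roughly as follows. The shipment-tracking endpoint, operation names, and Lambda ARN are all hypothetical, and the `bedrock-agent` request shape is an assumption based on the public boto3 SDK; the call itself is shown but not executed.

```python
import json

# Hypothetical OpenAPI schema for a shipment-tracking API. The agent
# plans which operation to call from this schema alone: the description
# text IS the documentation it reads, so no dialogue branches are coded.
SHIPMENT_API_SCHEMA = {
    "openapi": "3.0.0",
    "info": {"title": "Shipment Tracking API", "version": "1.0"},
    "paths": {
        "/shipments/{orderId}": {
            "get": {
                "description": "Return status, contents, and ETA for an order's shipment.",
                "operationId": "getShipmentStatus",
                "parameters": [
                    {
                        "name": "orderId",
                        "in": "path",
                        "required": True,
                        "schema": {"type": "string"},
                    }
                ],
                "responses": {"200": {"description": "Shipment status payload"}},
            }
        }
    },
}

def build_action_group_request(agent_id: str) -> dict:
    """Payload for bedrock-agent's create_agent_action_group: attach the
    schema and an executor Lambda to an agent (ARN is a placeholder)."""
    return {
        "agentId": agent_id,
        "agentVersion": "DRAFT",
        "actionGroupName": "shipment-tracking",
        "actionGroupExecutor": {"lambda": "arn:aws:lambda:...:function:shipments"},
        "apiSchema": {"payload": json.dumps(SHIPMENT_API_SCHEMA)},
    }

request = build_action_group_request("AGENT123")
# With AWS credentials configured:
#   import boto3
#   boto3.client("bedrock-agent").create_agent_action_group(**request)
```

The contrast with the old approach is that nothing above maps a customer utterance to an API call; the agent decides at runtime, from the schema's descriptions, when `getShipmentStatus` is the right tool.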

And so we have identified over 50 use cases across the business, and I'm sure there are many, many more. My team, I think, right now has about two use cases in production, with another about to go next week. And that's just generative AI; I'm not even touching classical AI, and we have more use cases on the way.

I'm hoping to have several more done by the end of the year. And this is going to touch every single aspect of our business. This is truly transformative, in the same way that air conditioning and HVAC transformed our society. It's going to be the same with generative AI. It's truly, truly amazing.

So just to kind of cap it all off. First of all, you want to make sure you're creating that space for innovation while also minimizing risk. This is all about that enterprise AI strategy. When you're thinking from an executive level and you're thinking, how do I structure an organization to ensure that my employees are working towards our corporate goals? This is what you think about, right?

You're thinking about those four pillars of how we can create a structure to our organization, create the tools and processes and the philosophies needed in order to ensure that we're all geared towards the same goal. That includes upskilling our people.

And again, not just your technical people, everyone knows we need to upskill our technical people, but we have to upskill the business too because AI is not always those highly general cases, right? This isn't Skynet, right? You want to identify specific use cases in specific areas where the business sees a gap and there's an area where they can optimize or produce value.

And you want to create a solution that involves AI, if it's the appropriate solution that will help them generate that value and fill that gap.

And then finally, I think generative AI is really changing the build-versus-buy equation. In the past, a lot of the overhead with AI was massive. If you were talking to vendors who wanted to build a product for you, they were building it based on many years of experience, with massive amounts of training data that they had created, very tailored, with a lot of domain expertise.

But generative AI kind of changes that conversation. Some of the stuff that used to take experts, you know, months in order to build, you can now do in days. The very first application we built with generative AI took two days. It's amazing, truly transformative.

And so with that, I thank everybody. Thank you AWS for having me and have a good day.
