How generative AI features of AWS AppFabric help SaaS app developers

Hi, everyone. My name is Anu. I'm a Principal Engineer on AppFabric. And hey, folks, I'm Sinu Mathew, one of the Principal Product Managers on AppFabric. Thanks for being here.

Today we're going to go through the challenges of working across multiple apps in our workplace. We'll talk about generative AI and the challenges of building AI features into apps. We'll also go under the hood a bit to show how we built it and how it works, and introduce the generative AI features in AWS AppFabric.

Here's a scene I'm sure we're all familiar with: you start your day with a clean desktop, and as you work through your day you open a new tab, a new window, a new app. Before you know it, you have a sprawl of SaaS apps. I use between 7 and 15 apps every day; I counted. It's the new normal. It's how we work, but it impedes our productivity. As you try to get work done, you toggle from one app to the next, switching context and hunting for information. We're trying to discover the pertinent data we need by going from one app to another, copying and pasting, trying to find what we need to be productive.

According to research, we spend about two hours every day switching context and manually searching for the information we need to perform our tasks. That obviously impedes our ability to get work done. We all love our apps, but we also want to be productive. We want our apps to cater to us and adapt to our work habits.

Now, let's talk about gen AI — the challenges of building generative AI features into an app to solve real customer problems. First, developers who need to build an AI assistant are often hindered by the amount of data they have. As we know: no data, no AI. When you build an AI system for your app, you're limited to the data inside that app. That also limits the outputs of your AI assistant, because you won't have the user's context across all the apps they actually use.

But let's say you solve that problem and you have the data. The next question is: how do you make sense of it? Given 5 or 10 different apps with different data sources, different schemas, and different types of data, how do you make sense of it and use it within your app? How do you generate insights from it? How do you correlate one source with another?

Soon after, you'll face the problem of privacy and data isolation. When you include generative AI in your app, you need to generate output that is pertinent only to that user — give them the data they have access to, no more, no less. And when you pull data from an app — really for any kind of feature — it's a bit like buying a new car: once the car leaves the showroom, it loses value. The same goes for data pulled from an external source: it immediately starts going stale, and you have to keep it in sync with the source. That's part of the challenge of building this yourself.

Awesome. Thank you, Anu. So folks, I'm really excited to introduce you to one of the newest features we launched on Monday, available in preview today, called AWS AppFabric for productivity. It's all about helping application developers build more personalized app experiences, or enrich their existing AI capabilities, with context from the multiple applications we all use. Let's dive into this a little further.

AppFabric provides generative AI powered APIs that application developers can use to generate cross-app insights and actions. What is an insight? What should I be focused on? What should I best prepare for? Those are the types of insights we're starting with. But insights are only half of the story: what do I do with that insight? So we're also enabling actions — what is the next best action I can take to act upon that insight, whether that's sending an email, scheduling a meeting, or creating a task.
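To make the shape of this concrete, here is a minimal sketch in Python of what a cross-app insight with suggested actions might look like. The class and field names are illustrative assumptions, not the actual AppFabric API response format.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: these names are assumptions, not the real API shape.
@dataclass
class SuggestedAction:
    kind: str           # e.g. "SEND_EMAIL", "SCHEDULE_MEETING", "CREATE_TASK"
    draft_content: str  # LLM-generated starting point the user can edit

@dataclass
class CrossAppInsight:
    summary: str                                            # a couple of sentences
    source_links: List[str] = field(default_factory=list)   # resources that powered it
    actions: List[SuggestedAction] = field(default_factory=list)

insight = CrossAppInsight(
    summary="Launch review moved to Thursday; two Jira blockers are still open.",
    source_links=["https://example.atlassian.net/browse/PROJ-42"],
    actions=[SuggestedAction("SCHEDULE_MEETING", "30-minute sync on PROJ-42 blockers")],
)
```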

Secondly, because we're using data across all these different applications, AppFabric takes away all the complexity of building and managing integrations. We take it one step further by normalizing the data into a common schema. So when I think about an email, whether it lives in Google or in another mail app, it's simply an email in our system, because we've done the hard work of normalizing that data across the apps.
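As a rough illustration of what normalization means here, the sketch below maps two hypothetical provider payload shapes onto one common email record. The provider names and field names are invented for illustration; the real AppFabric common schema is more extensive.

```python
# A minimal sketch of schema normalization. The provider payload shapes and
# the common fields below are hypothetical, not AppFabric's actual schema.
def normalize_email(provider: str, payload: dict) -> dict:
    """Map provider-specific email fields onto one common shape."""
    if provider == "provider_a":
        return {"subject": payload["subject"], "sender": payload["from"], "body": payload["snippet"]}
    if provider == "provider_b":
        return {"subject": payload["title"], "sender": payload["author"], "body": payload["preview"]}
    raise ValueError(f"unknown provider: {provider}")

email = normalize_email(
    "provider_a",
    {"subject": "Q3 plan", "from": "a@example.com", "snippet": "Draft attached"},
)
```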

Third, one of the core tenets of AppFabric is that we want to meet users where they are today. We have no interest in taking users outside of your applications. We see this as an enablement tool to empower you to build and embed these capabilities directly into your UI and your applications themselves.

And lastly, when we think about personalization, as Anu mentioned, we're really focused on user-level insights. Users have to authorize and select which applications they want used to generate insights. That also lets us surface which specific artifacts and resources across their applications were actually used to generate the output, which gives a lot more confidence in what the generative AI produces and builds a lot more trust in the responses.

Now, I've talked a little about this, but I actually want to show a real demo. We've been working closely with Asana to build and embed this capability into their application. What you'll see at the start of this demo is the end-user authorization step, as they connect individual applications. After that, you'll start to see insights being generated on both sides of the screen.

What's really cool is that Asana decided to embed the capabilities directly into their home page, where lots of users see their daily tasks and projects, and now they can bring new cross-app data into that experience. What you see here is that I've connected a couple of applications and cross-app insights are already starting to populate my home page. You get a quick summary of what each insight is about — a couple of sentences — with embedded links showing which resources actually generated that insight. You also see the actions I mentioned before: you can send an email, schedule a meeting, create a task. These are just starting points for our actions. The great thing is that the LLMs build on top of the insights and generate draft content, so the user can quickly edit that information and execute it, and it completes in another application altogether.

The second type of insight, which you're about to see on the right-hand side, is our second API, called meeting preparation. We all have upcoming meetings, and sometimes we get pulled into a meeting without even being sure what we have to talk about. This is your new best friend, because it helps you prepare: it tells you what the meeting is about, gives you a quick summary, and surfaces relevant information. In this case you're seeing a Jira issue surfaced as part of the meeting preparation output, which puts all the details at the user's fingertips so they can best prepare for that meeting.

So let's talk about these two APIs in detail — specifically what we saw in the demo. The first one, as I mentioned, is called Actionable Insights. It helps me orient myself: what do I need to pay attention to today? What is upcoming that requires my focus? In addition to generating the insight, it also provides those actions, and those actions are super critical: you know not only what information you have, but how to act on it afterwards.

The second use case is meeting preparation, or meeting insights. What's really cool is you'll notice Miro here, one of the other customers we've been talking with about these ideas. Their users often dump all of their content into the collaborative whiteboard and then use it to organize their thoughts and prepare for meetings. With AppFabric, they now have a single API that can say what a meeting is about and bring all the relevant information directly to you.

What's really nice about this use case is that one of the ideas we've discussed is extending this into their own AI capabilities. Miro's AI assistant could take all this information and build a presentation or an agenda from it. So now you're looking at a world where you have all this extra data coming into your application, and the features and functionality of your existing app can build on top of it, which is really powerful.

And now I'll hand it back to Anu to tell us a little about how all this works.

Thanks, Sinu. So LLMs are smart — we all know that. You open a chatbot, you chat with an LLM, and it gives you fantastic information. However, if I ask an LLM what I should have for breakfast tomorrow, it will give me a generic breakfast idea. But will it know what is useful for me? What my preferences are? What I had for breakfast yesterday? What I like and don't like? Will it give me information that is relevant to me? Not likely. Why? Because the LLM does not have the context. It doesn't have my context; it doesn't know me.

Now there's a pattern that has emerged with generative AI called retrieval-augmented generation, or RAG. It's the process of enriching the prompts sent to the LLM with the user's context. Let's take the breakfast example again. Instead of just asking the LLM what to have for breakfast tomorrow, suppose I tell it: I had eggs for breakfast yesterday; these are my preferences; these are the things I like and don't like; these are the things I like to cook; these are my allergies. Now, with all that context, what should I have for breakfast tomorrow? Because it knows me a bit better, it gives me relevant information and a good suggestion of what makes sense.

That's all RAG is: it takes a prompt, enriches the prompt based on knowledge sources, and then sends the enriched prompt to the LLM to get information that is relevant to the person asking.
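Here is a toy end-to-end sketch of that retrieve-then-enrich flow. The keyword-overlap retrieval and the `llm` callable are stand-ins for illustration, not AppFabric internals; a real system would retrieve with embeddings, as described below.

```python
# A toy RAG loop: retrieve user facts related to the query, enrich the
# prompt with them, then hand the enriched prompt to an LLM.
def retrieve_context(query: str, knowledge: list[str]) -> list[str]:
    # Stand-in retrieval: keep facts sharing a word with the query.
    words = set(query.lower().split())
    return [fact for fact in knowledge if words & set(fact.lower().split())]

def answer_with_rag(query: str, knowledge: list[str], llm) -> str:
    context = retrieve_context(query, knowledge)
    prompt = "Context about the user:\n" + "\n".join(f"- {c}" for c in context)
    prompt += f"\n\nUsing that context, answer: {query}"
    return llm(prompt)  # any text-completion callable

facts = ["I had eggs for breakfast yesterday", "I am allergic to peanuts", "I like quick recipes"]
print(answer_with_rag("what should I have for breakfast tomorrow", facts, lambda p: p))
```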

So how does AppFabric use RAG? There are two major parts to RAG in AppFabric — and here I'm lifting the hood a bit to show how it works underneath. First, we need to build the knowledge sources. Remember what I said about RAG: you need context to enrich the prompt.

The first part is where the user gives permission and we build the user's context. The second part is where the user asks: OK, what should I have for breakfast tomorrow? In the first part, the user owns their permissions: they choose which apps they want insights from. We're trying to enable users to be productive across all their apps, not just one. So the user specifies: I work in Slack, I have email, I have projects in Jira or Asana, et cetera — these are my sources, these are the things I want help being productive with.

So the user specifies: these are my projects, these are my Slack channels, and so on. This builds the knowledge sources so that we know the user a bit better. That's the first step — the lower part of the diagram, where they grant permission.
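A hypothetical record of those user-granted knowledge sources might look like the following; the field names are invented for illustration and are not AppFabric's data model.

```python
# Hypothetical shape of a user's authorized knowledge sources.
user_knowledge_sources = {
    "user_id": "u-123",
    "authorized_apps": [
        {"app": "SLACK", "granted": True},   # Slack channels and messages
        {"app": "JIRA",  "granted": True},   # projects and issues
        {"app": "GMAIL", "granted": False},  # user opted this one out
    ],
}

# Only apps the user explicitly granted feed their context: no more, no less.
sources = [a["app"] for a in user_knowledge_sources["authorized_apps"] if a["granted"]]
```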

Now for the app developer: to use AppFabric, the app integrates with the APIs that Sinu described, where it can get insights to help users better situate their day. And this is where RAG begins.

Your app calls AppFabric. AppFabric then takes those knowledge sources, pulls your data temporarily in a safe way, encrypts it, and stores it temporarily in an Amazon S3 bucket. So at this point we're pulling the data into S3 and encrypting it. Then we generate what are called embeddings. Embeddings are vector representations of text, and they're useful for search — not plain keyword search, but what we call semantic search.
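For the embedding step, here is a sketch using Amazon Bedrock. The talk doesn't say which embedding model AppFabric uses internally, so Titan Text Embeddings below is only an assumption — one of the models Bedrock offers.

```python
import json
import boto3

# Generate an embedding (a vector of floats) for a piece of text via Bedrock.
# Model choice is an assumption; AppFabric's internal model isn't stated.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text: str) -> list[float]:
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(resp["body"].read())["embedding"]

vector = embed("Standup notes: demo is blocked on the PROJ-42 bug")
```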

What this means, using my breakfast example: if I do a semantic search for "breakfast" — the question was what to have for breakfast — I will get back from the vector database things that are similar to breakfast. I might get bacon and eggs; I might get cereal — things that are similar in meaning to the query.

Remember, in RAG the first step is to do a search, get the things that are similar, and enrich the context. That's what happens here. We generate the embeddings for the user's data and populate the Amazon OpenSearch Service index so we can search it. When this person asks what to have for breakfast tomorrow, we fetch all the items in their context related to breakfast and send them to the LLM.
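A semantic search against OpenSearch might look like the sketch below, using the k-NN query type from the OpenSearch k-NN plugin. The index name and field names are hypothetical, not AppFabric's internal schema, and the index is assumed to have been created with k-NN enabled.

```python
from opensearchpy import OpenSearch

# k-NN semantic search: find the stored items whose embeddings are nearest
# to the query embedding. Index and field names are hypothetical.
client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

def semantic_search(query_vector: list[float], k: int = 5) -> list[str]:
    resp = client.search(
        index="user-context",  # assumed k-NN-enabled index
        body={
            "size": k,
            "query": {"knn": {"embedding": {"vector": query_vector, "k": k}}},
        },
    )
    return [hit["_source"]["text"] for hit in resp["hits"]["hits"]]

# e.g. semantic_search(embed("what should I have for breakfast tomorrow"))
```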

We're integrated with Anthropic via Amazon Bedrock, a new service for generative AI. So with the user's context and the LLM, we can give users information that is relevant to their work day — relevant to the tasks they're working on, to their Slack messages, to their emails. As in Sinu's demo, we can tell the user: this is what's important for you, across that email message, that Slack channel, and that project. We hold your context on a temporary basis, and we can tell you what's important in your upcoming meeting, what you should read, et cetera.
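The final generation step might look like this sketch, which sends the enriched prompt to Claude through the Bedrock runtime API. The model ID and prompt wording are examples; the exact model version and prompts AppFabric uses aren't stated in the talk.

```python
import json
import boto3

# Send the retrieved context plus the question to Claude via Amazon Bedrock.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def generate_insight(context_items: list[str], question: str) -> str:
    context = "\n".join(f"- {c}" for c in context_items)
    prompt = (f"\n\nHuman: Here is the user's work context:\n{context}\n\n"
              f"{question}\n\nAssistant:")
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-v2",  # example model ID, an assumption here
        body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 300}),
    )
    return json.loads(resp["body"].read())["completion"]

# e.g. generate_insight(retrieved_items, "What should I focus on today?")
```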

Awesome. So we've spent a little time on AppFabric now, and just to wrap things up: it's really easy to get started with this service. First, register your application, and you get access to the APIs we've been talking about. Next, it's on you to figure out how to embed and build the experiences your users will want. After users authorize access to their data, you'll start seeing those insights. I do recommend scanning the QR code if you're interested — it has a lot of resources: our tech documentation, FAQs, and the getting started guide to walk you through setup.
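As a purely illustrative sketch of that flow once your app is registered, a call for insights could look something like this. The endpoint, route, and payload here are hypothetical placeholders, not the real AppFabric API; the actual request flow is in the getting-started guide.

```python
import requests

# Hypothetical stand-ins: this is NOT the real AppFabric endpoint or route.
API_BASE = "https://appfabric.example.com"

def get_actionable_insights(user_token: str) -> list[dict]:
    # The user has already authorized their apps; the token represents them.
    resp = requests.get(
        f"{API_BASE}/insights/actionable",  # placeholder route
        headers={"Authorization": f"Bearer {user_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("insights", [])
```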

Alright, let's do some quick takeaways and then we'll wrap up the evening. We've talked a lot about bringing data into other applications — what does that actually mean for an app developer? Well, now you've created a much stickier app experience; users will have better satisfaction and retention, and ultimately that can lead to better top-line growth.

Secondly, we don't require end customers to have an AWS account to use the service — anyone can access and use it after you build it.

Third, we've talked about control and flexibility over the UI. You know your users best, so we want to enable you to build the right experiences.

Fourth, think about the resources dedicated to building and managing integrations and normalizing data: AppFabric takes away all of that complexity, so you can focus your resources on the right things.

And lastly, when we think about AI, and the quality of AI more broadly: the data in a single application is powerful and differentiates your business, but from a user's perspective their data lives across multiple applications. This service provides an opportunity to bring all that data together and produce more useful, higher quality generative AI insights.

For those interested in learning more, we have a couple more sessions tomorrow: a pretty exciting panel tomorrow morning with Asana, Miro, and Anthropic joining us, and then tomorrow afternoon a roughly 60-minute deep dive on this service to go deeper into what we launched.

And with that, I just want to say thank you all for joining and sticking around. I hope you're inspired to start thinking about how you can build with these capabilities.
