What’s new with AWS AppSync for enterprise API developers

Welcome, everybody. My name is Michael Liendo. On stage with me, I have Brice Pellé. He's going to introduce himself in just a moment, but just to make sure you're in the right spot - I see some folks coming in at the moment, perfectly fine - this is going to be "What's New with AWS AppSync for Enterprise API Developers."

A quick show of hands - who's used AppSync before? Ok, nice. We've got an awesome mix. And if you didn't raise your hand, that's perfectly ok - you're in the right spot, because we're gonna be covering a lot of that. We're also gonna be introducing some really cool things such as generative AI, how to build full stack applications with AppSync, and a couple of hidden gems along the way. So it's gonna be a lot of fun.

You can follow us online - I'm pretty much everywhere. I'm a senior developer advocate over at AWS. My partner Brice here is a solutions architect, super smart guy. You're going to hear more from him in just a moment, but I do want to go ahead and start getting into these slides.

Ok, what is AWS AppSync, right? We saw half of the group sort of raise their hands and the other half maybe not so familiar. And that's perfectly fine, because we're gonna be covering just that. We're gonna be talking about how it's built for developers. And I do want to emphasize it's built for all developers - full stack, back end, and if you are using CDK and you're a DevOps person, there's a little bit of AppSync for everybody, and hopefully you'll find that as you take some notes away from our session today.

Now, built for databases - but I'm gonna throw a little asterisk there, ok? When we talk about enterprise applications and AppSync, it 100% is built for databases. It's built for developers, it's built for devices - I mean, this is like the trifecta there - but the key point that you want to take away is that it's built for data sources. AppSync, and GraphQL in general, is really great at connecting to various data sources. So I do want to plant that seed: when we talk about databases, it doesn't have to be databases, and you'll see what I mean in just a moment.

Now, we are gonna cover event-driven, API-driven application development. It's all the rage these days. If you have a microservice, that's great. But how do you connect that to your other microservices? How do you get your data from one application to another? Again, AppSync is gonna be really great at that as well.

And of course, because this is re:Invent and this is 2023, generative AI is gonna be in the mix. There's really no escaping it this time.

Alright, so let's go ahead and chat a little bit about what AWS AppSync is and how it relates to GraphQL. Really, what we're talking about is a fully managed, serverless offering from AWS. AppSync is going to give your front end applications a way to connect to your back end infrastructure. And recall that I mentioned it's not just databases: you can connect to DynamoDB; you can connect to - I don't want to give away any spoilers - RDS instances; we also have HTTP APIs. And I have a couple of demos early on in this session where we are gonna be integrating directly with AWS services without the need for an intermediary Lambda function.

Now, when we talk about being fully managed, when we're talking about it being serverless, really, we're trying to minimize the amount of code that we're writing. Now, I don't know about all of you, but anytime I have to write code, it's like the world is just that much more dangerous, right? I prefer it when the service writes the code for me, so that I don't have to manage it or do any upkeep on it. Once it's set, it's set, and I can trust that it's going to stay that way, because I don't need to worry about patching or updating or things of that nature - security and performance come out of the box.

When we talk about truly serverless applications: as you scale up, your application is going to scale with you. As your users have a big event and you need autoscaling, and you have a bunch of requests hitting your application, AppSync is going to scale to meet those needs. Security comes in many forms. We have Cognito in place; we also have your custom OIDC providers, and it integrates naturally with those. And again, we're talking about a fully managed, dedicated service that is doing all of this. You don't have to wire it up, but there is the option if you want a truly custom auth flow as well. So that's really what we're talking about here - AppSync being a fully managed, dedicated service.

But how does it relate to GraphQL? Show of hands - who's familiar with GraphQL? More hands than before? Perfect. Yeah, AWS has a fully-managed solution. So you don't have to set up an instance to host your GraphQL server. This is all taken care of for you.

Now, a lot of you, because you've just raised your hands, know that we're talking about schema-driven development, right? You have your types; you have that safety in place where you can see exactly how your application is going to be structured, because you told us so. You have introspection and custom queries and all things of that nature.

Now, the resolvers, though, are where it gets interesting. This actually came out last year, but I do wanna make sure I resurface it for the folks that are new to AppSync in particular. There used to be this thing from Apache called the Velocity Template Language, and it wasn't the most friendly thing to write. So a lot of our customers said, you know what, I'm just going to use a Lambda function. Maybe I don't care so much about the cold starts from a Lambda function, or maybe I don't mind that I now have to manage this code - my custom business logic - myself, because a Lambda function gave me an easy way to get my business logic out of my application.

We listened, and what we ended up delivering were JavaScript and TypeScript resolvers - TypeScript through the use of a build step, where you transpile it down to regular JavaScript, which is what AppSync accepts. So you have these really cool resolvers that end up looking like Lambda functions. And again, you'll see some examples of that, but at the end of the day, they're really just a layer, an abstraction over VTL, so that you don't have to write it. We manage all of that for you.

Now, with that in mind, I do want to go ahead and hand it over to Brice. We're going to talk about how it's built for developers.

Awesome. Let me see if this one works. So, hey folks, my name is Brice Pellé and I'm the principal product manager for AWS AppSync. Really excited to be here, and I'm super excited to tag team this session with Michael.

So we talked about AppSync and we really believe that AppSync is the best way to connect your APIs, to connect your clients to data and to do that, we really want to focus on the developer experience.

So as Michael talked about, last year we released support for JavaScript resolvers, and this was a step-function-level improvement to the developer experience. It really allowed developers to start implementing business logic and directly accessing data sources from their resolvers.

We then followed that up this year with TypeScript. And what I mean here is that we made sure that developers could use TypeScript from their local environment and that the compilation result from those TypeScript functions would work with AppSync.

We also introduced support for source maps, so you would get improved logging and I'll show you what that looks like. And we worked on introducing more support for things like array functions and arrow functions so that you could do more in your code - more looping, more transforming, more mapping of your functions. We essentially just wanted to enable everybody to be able to do more directly in their code.

And then last, but not least, we definitely wanted to address this. And this year we released support for unit resolvers. So you can now use JavaScript not only to write your pipeline resolvers and your pipeline functions, but for those situations where you need to do a single data source access, you can use unit resolvers.

So let's talk about resolvers. How does the development process typically work if you're gonna use something like JavaScript resolvers?

Well, in a resolver, you have what we call a request and response handler. The request handler is responsible for telling AppSync how to interact with the data source. You're going to make a request to your data source and then your data source is going to respond back with some data - with a response. That's where the response handler comes in. You write your response handler to handle that data and then to format the data in a shape that's suitable for your client response.
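As a minimal sketch of that request/response pair, a unit resolver might look like the following. This is written as plain JavaScript functions with a hypothetical `getTodo` query against a DynamoDB table; in a real AppSync resolver file these two functions would be exported.

```javascript
// Minimal sketch of an AppSync JavaScript unit resolver. The getTodo field
// and the DynamoDB shapes here are assumptions for illustration.
function request(ctx) {
  // Tell AppSync how to call the data source: a DynamoDB GetItem request
  // keyed on the id the client passed as a GraphQL argument.
  return {
    operation: 'GetItem',
    key: { id: { S: ctx.args.id } },
  };
}

function response(ctx) {
  // The data source's raw result lands in ctx.result; reshape it here into
  // whatever the client's GraphQL type expects.
  return ctx.result;
}
```

The request handler never touches the network itself - it only returns a description of the call, and AppSync executes it against the configured data source.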

Now, we talked about TypeScript, and that's all optional - you can use it if you want to, or don't, right? The cool thing about using TypeScript and having local development support is that you can actually start using your own modules, your own custom libraries. JavaScript resolvers do not support importing NPM libraries, but that doesn't stop you from writing your own libraries that you can use over and over again to modularize your code. All you need to do is import your libraries, use a tool like esbuild to bundle your code, and then take the result of that bundling step and use it to update your resolver.

So what does that look like? Well, we've been talking about gen AI, we've been talking about Bedrock. So let's imagine a scenario where you are building your next great gen AI application and you want to present a simple API to your client that exposes some AI capabilities. To do this, you're going to write a resolver. And because you know that you are going to use this type of Bedrock functionality over and over again, you write a utility for it.

So in my resolver, I simply import my invoke function and a transform function that I'm going to use to process my Bedrock request. Then in my request handler, I simply call this invoke function, specify the model that I want to use - here I'm using Anthropic's Claude v2 model - and I pass the instruction that I received from my client.

Then in the response handler, I use a custom transform function to transform the response that I get back from Bedrock. One thing to remember with AI: a lot of the time, what you're getting back from these LLMs is just pure text, but you may want to transform that into something that's more suitable for your clients. So you can do that here.
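Here is a hedged sketch of what such a utility and resolver could look like. The `invoke` helper, the Claude request body fields, and the `instruction` argument are all assumptions for illustration - not Bedrock's or AppSync's exact API surface.

```javascript
// Hypothetical helper mirroring the "invoke" utility described in the talk:
// it builds the HTTP request AppSync sends to the Bedrock runtime endpoint.
// The body fields follow the general Claude v2 shape, but are illustrative.
function invoke({ modelId, prompt }) {
  return {
    resourcePath: `/model/${modelId}/invoke`,
    method: 'POST',
    params: {
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt, max_tokens_to_sample: 300 }),
    },
  };
}

function request(ctx) {
  return invoke({ modelId: 'anthropic.claude-v2', prompt: ctx.args.instruction });
}

function response(ctx) {
  // LLMs return plain text; reshape it into something client-friendly here.
  return JSON.parse(ctx.result.body).completion;
}
```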

Now, the benefit of this is that you can let AppSync deal with all of the complexities that come with calling an AWS service. So here all I'm doing really is letting AppSync make the request to Bedrock and AppSync can sign my request for me using IAM permissions that I've configured on my data source. So you don't have to worry about configuring all of that in your client. You can do it directly in AppSync.

Then with source maps, you get an improved logging experience. You can use a tool like esbuild to bundle your code and include a source map in your resolver file. When AppSync sees that the source map has been included, it will automatically point to the original location in the source code where a line - something like a console.log - was hit. And if you run into runtime errors that we catch, we will also tell you specifically where in the source code the error occurred.

So including this source map in your resolver actually makes for a much better experience when dealing with CloudWatch logs and when dealing with runtime errors.

The other thing that we released this year - and I'm actually interested to see if people know about this - is a new DynamoDB module for JavaScript resolvers to help developers easily and simply interact with DynamoDB. Just a quick show of hands - anybody used this DynamoDB util? Anybody heard about this? Not a lot of hands. Ok, that's interesting.

Well, the reason we did this is we wanted to make it easier for everybody to interact with DynamoDB. So developers on AWS love DynamoDB and DynamoDB is heavily used with AppSync.

It's one of the most used data sources with AppSync. The thing about DynamoDB is that it can be really hard to interact with if you think about the native DynamoDB syntax. So we just wanted to make things easier in your JavaScript resolver.

So take, for example, what I'm doing here: I have an update mutation that updates an item in a DynamoDB table. How do I write a resolver to do this? I have my request, and I want to make sure that I can update a to-do with a specific ID. And when I make this update, I want to increment a version attribute that lives in my table.

To do this - I'm not sure how I would do it with the native syntax, but I can easily do it with my DynamoDB utility. I just add a version to my values and use the DynamoDB.operations.increment function to specify that I want to increment the version. When I do the update, it's also really easy to write a condition here: I simply specify that when I make the update, I wanna make sure that the ID that I'm specifying actually exists in my table. This ensures that the update operation is only going to update the item if it already exists, right - it's not going to create a new item if it's not there. And then that's it: I call DynamoDB.update, specify the key, specify the values and specify the condition - a lot simpler than doing it the native way.
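To illustrate what the module is doing on your behalf, here is a self-contained sketch - not the real `@aws-appsync/utils/dynamodb` source, and the helper names are assumptions - of how an increment marker plus a condition can be turned into a native DynamoDB UpdateItem request with a generated update expression.

```javascript
// Illustrative sketch of the kind of translation the DynamoDB module does:
// plain values and an increment marker become an update expression, and a
// condition guards against creating new items.
const operations = {
  increment: (by = 1) => ({ __op: 'increment', by }),
};

function buildUpdate({ key, values, condition }) {
  const sets = [];
  const names = {};
  const vals = {};
  Object.entries(values).forEach(([attr, value], i) => {
    const n = `#a${i}`;
    const v = `:v${i}`;
    names[n] = attr;
    if (value && value.__op === 'increment') {
      vals[v] = value.by;
      sets.push(`${n} = ${n} + ${v}`); // add to the existing attribute value
    } else {
      vals[v] = value;
      sets.push(`${n} = ${v}`);
    }
  });
  return {
    operation: 'UpdateItem',
    key,
    updateExpression: `SET ${sets.join(', ')}`,
    expressionNames: names,
    expressionValues: vals,
    conditionExpression: condition, // e.g. only update items that exist
  };
}

// Example: bump the version while updating the title, but only if the
// item already exists in the table.
const req = buildUpdate({
  key: { id: '123' },
  values: { title: 'Learn AppSync', version: operations.increment(1) },
  condition: 'attribute_exists(id)',
});
```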

We also introduced support for DynamoDB projections. DynamoDB is a NoSQL database, and the items in your DynamoDB tables are composed of any number of attributes. You can greatly improve the performance of your read operations by specifying the attributes that you want to get back with your query, scan and get operations. This is super easy to do with the DynamoDB module: just pass it an array of projections, where your array of projections is made up of attribute paths.

So here, what I'm doing in my request handler is simply getting the selection set list - which is an array of the fields that I want to get back as part of my query - and replacing the forward slash with a dot. This creates a valid attribute path. Then I simply pass the projection to the get operation, and this ensures that I'm only going to get the fields that I need back from DynamoDB.
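That transformation is small enough to sketch directly. Assuming the selection set arrives as slash-delimited paths (as in `ctx.info.selectionSetList`, e.g. `["id", "author/name"]`), the helper below produces DynamoDB attribute paths to pass as a projection.

```javascript
// Sketch of the projection trick described above: swap "/" for "." to turn
// slash-delimited GraphQL selection paths into DynamoDB attribute paths.
function toProjection(selectionSetList) {
  return selectionSetList.map((field) => field.replace(/\//g, '.'));
}

// e.g. a query selecting id and author { name }:
const projection = toProjection(['id', 'author/name']);
// projection -> ['id', 'author.name']
```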

If you haven't tried it, you should. This could greatly enhance the performance of your DynamoDB requests. But the key thing here, you don't have to remember any clunky syntax. It's really easy to work with.

The other thing that I really like about the DynamoDB module is that when you take that together with the JavaScript functionality, it becomes really easy to do things like queries. And when you look at doing things like working with single table design where you have a single table where you store multiple types of data, it becomes really easy to retrieve that information here.

For example, I have a table with a composite key - a partition key and a sort key. To fetch the data that I want, I can use my DynamoDB module and just do a query: specify that I want to get all of the items that match the partition key, and then specify that I want all of the items where the sort key begins with a specific value.

And then when I get that data back in my response, because I'm using JavaScript, it is very easy for me to extract the data that I want. I know that my data is returned in a specific order. So I know that the first item in my list of results is going to be my course. And I know that all the other items are going to be the registration information that is associated with my course. Super easy to do with DynamoDB, the DynamoDB module and JavaScript.
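A minimal sketch of that response handler might look like this - the ordering assumption is the one described above: with this key design, the query returns the course item first, followed by its registrations.

```javascript
// Single-table design response handler sketch: the first item of the query
// result is the course; the rest are the registrations associated with it.
function response(ctx) {
  const [course, ...registrations] = ctx.result.items;
  return { ...course, registrations };
}
```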

So what we've highlighted here is that you can actually do a lot with JavaScript resolvers, especially if you're accessing things like DynamoDB or trying to implement some simple direct-access business logic.

So when should you use them? As I mentioned: when you have direct data source access that you wanna do - you wanna interact with DynamoDB, for example, or access an HTTP endpoint, or interact with an AWS service; when you want to do simple data transformation; and when you want to implement an authorization step before accessing some more data. The cool thing is that if you have a more complex use case, you can always fall back to using a Lambda function - there's nothing stopping you from doing that, and we have great integration with Lambda functions as well. You can do direct Lambda access, and you can also do batching with Lambda, which is really efficient. So these are some of the benefits of using JavaScript resolvers.

If you go to the AppSync console today, you can start creating your API and choose to get started from an existing Amazon Aurora database. Simply provide the information about your database - the ARN - and specify the database that you want AppSync to introspect from within your cluster. In a couple of seconds, AppSync identifies all of the tables that exist in your database and presents you with types that are mapped to those tables.

You can rename the types. A lot of times in existing databases - legacy databases - you'll have tables with kind of funky names. So if you want to take those tables and create types with different names, you can do that directly as part of the wizard that we provide. You can also choose to exclude specific tables and specific types from your API. This is another key thing: a lot of times you don't want to expose every single thing that is in your database, right? So you can choose to remove tables and types that you don't want to expose to your clients.

You can preview your schema at any time to see what we're about to generate, and you can choose whether you want to create a read-only API - an API that is only composed of queries - or an API that has queries, mutations and subscriptions. So here you see you can preview your schema, and we show you exactly what we built from what we discovered in your database. And then in a matter of minutes - literally, when I try this, most of the time it takes me less than a minute - you have a fully operational, ready-to-use API.

So you have something that you can start using right away, and you can start adding additional AppSync features to it to use in your own application. This is a feature that is available today and works with Amazon Aurora Serverless configured with the Data API. And I'll say that coming soon, you will be able to use this functionality with additional Amazon Aurora configurations, like Aurora Serverless v2 and provisioned versions of Aurora with PostgreSQL.

So how does this work? Well, what I showed you is what happens in the console, but this is something that you can do outside of the console as well. We essentially introduced two new APIs. One is called StartDataSourceIntrospection. This is the API that you call on AppSync: you specify your database information, and AppSync will connect to your database and find out all of the information about your types - discover primary keys, discover indexes, discover enums - and return all of that information back to you.

When you then call GetDataSourceIntrospection, it will return all of the type information, and it will even provide you SDL for all of the types that we discovered. So you can take that information right away and create a schema yourself. In the console, we go one step further and create all of the resolvers for your queries and mutations.

Now, I talked about using AppSync to abstract your back end data logic, but this is not always the case, right? There are a lot of situations out there on the web, like I put here, where you have applications or clients that are directly accessing databases - which is pretty dangerous when you think about it. Is anybody here familiar with Little Bobby Tables? Show of hands. Anybody heard of him? Y'all should look up Little Bobby Tables - it's an xkcd joke about SQL injection. But SQL injection attacks are a real thing, and they happen when you allow dynamic data and dynamic values to modify statically provided SQL statements and make them work in ways that were not intended.

So the example here is that I've got a very simple, kind of straightforward query that says: select all from users where the user id equals the parameter. And it seems like this is totally fine. But if I allow any dynamic values to be passed into this statement, it can actually change the intent of my initial SQL statement and cause things that I did not intend to happen - like dropping my students table. That's something we want to avoid.

So in the JavaScript community, what we've seen happen over the last couple of years is using JavaScript tagged templates to address some of this issue. And this is something that we are now making available in your JavaScript resolvers as well. You can use what we call a sql tagged template to write a static SQL statement, and we will use the result of that tagged template to create a SQL fragment that you can use to create a SQL statement. The great thing about this is that it is actually safe, because it is a static statement that cannot be modified at run time. The only way you can pass dynamic values to the statement is by using a template expression, and we will automatically take the values that you pass to that expression and send them to the database as parameters, using placeholders.

So what is happening here is that when this runs at run time, the dynamic value pulled from the context (ctx) is replaced by a placeholder and sent to the database as a parameter. We are actually going to use the database engine to do the heavy lifting here and properly handle dynamic values. So this allows you to write your SQL statements in a way that feels natural, and to write them directly in your resolver.
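Here is a self-contained sketch of the idea behind a sql tagged template. The `:P0` placeholder format is illustrative, not AppSync's internal representation - the point is that literal parts stay static while interpolated values are collected as parameters.

```javascript
// Sketch of a `sql` tagged template: the literal parts of the statement are
// fixed at author time, and every interpolated value is swapped for a
// placeholder and collected as a parameter for the engine to bind.
function sql(strings, ...values) {
  let text = strings[0];
  const params = [];
  values.forEach((value, i) => {
    params.push(value); // sent to the engine separately, never concatenated
    text += `:P${i}${strings[i + 1]}`;
  });
  return { text, params };
}

// Little Bobby Tables' payload cannot rewrite the statement itself:
const id = '1; DROP TABLE students';
const stmt = sql`SELECT * FROM users WHERE id = ${id}`;
// stmt.text   -> "SELECT * FROM users WHERE id = :P0"
// stmt.params -> ["1; DROP TABLE students"]
```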

Now, this is great - you can write your static statements - but we can do even better. We can use new utilities that we provide as part of our RDS module, which we introduced earlier this week, to programmatically write SQL statements. A lot of times we just want to interact with our database and get data from it, and you can do that by using some of the functions that we now provide.

So the example here is that we're using a select function in our TypeScript resolver to fetch rows of data from our messages table. What is going to happen here is that I can specify my table, specify my columns, my where condition, my limit, my offset, and how I want to order my results. And this automatically creates the statement that is going to be sent to my Aurora database. We automatically take all of your entities - your column entities, table entities, schema entities - and properly escape and quote them. Then we take all of the dynamic values and replace them with placeholders. And again, we send dynamic values directly to the database and let the database engine do what it does best.
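As an illustration of what such a builder produces - this is a sketch, not the actual `@aws-appsync/utils/rds` implementation - a select-style function can quote identifiers and parameterize values like this:

```javascript
// Illustrative select builder: identifiers are quoted and escaped, dynamic
// values become placeholders, and the database engine binds the parameters.
function select({ table, columns, where, limit }) {
  const quote = (ident) => `"${String(ident).replace(/"/g, '""')}"`;
  const params = [];
  const conditions = Object.entries(where).map(([col, val], i) => {
    params.push(val); // parameterized, never inlined into the statement text
    return `${quote(col)} = :P${i}`;
  });
  const text =
    `SELECT ${columns.map(quote).join(', ')} FROM ${quote(table)}` +
    ` WHERE ${conditions.join(' AND ')} LIMIT ${Number(limit)}`;
  return { text, params };
}

const stmt = select({
  table: 'messages',
  columns: ['id', 'body'],
  where: { channel: 'general' },
  limit: 10,
});
// stmt.text -> SELECT "id", "body" FROM "messages" WHERE "channel" = :P0 LIMIT 10
```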

So even if you are not a SQL expert, this makes things easier to work with. You can do the same thing for an insert. Here I'm writing an insert statement, and I'm going to create a Postgres statement. So I specify the table - I want to insert something into my users table - and specify my values. I can get my values directly from my GraphQL operation, because again, we are going to take all of that data and handle it properly. And because I'm using Postgres, I can use the RETURNING statement and say that I want to get back my id, name and created-at.

And the same thing happens here: we take all of the properties of the object and map the property names to the columns - we escape them and quote them properly - and we take the values and replace them with placeholders. So now I think you can see how this works. We can do the same thing with an update. In this example, I'm working with a MySQL database, so I'm writing a MySQL statement, but the same thing applies.

I wanna do an update in my messages table. I take the values from my input, and I specify my condition: I want to update a row where the id equals the id that was passed in the input. Now, I'm working with MySQL, and MySQL doesn't support the RETURNING statement - but when you are working with AppSync and accessing an Aurora database, we let you send up to two statements at the same time.

So with MySQL, I can do my update and at the same time specify that I want to get back the row that was just updated. So you don't have to make two calls from your clients, and you don't have to use a pipeline resolver - we will do it directly in a single resolver. And then the same thing with delete.

And as you can see here, when I use the createMySQLStatement function, I can mix and match: whether I'm using a tagged template, whether I'm using something that was constructed with my RDS utilities - I can even pass it a raw string. So you can use whichever method you want to create your statement.

Now, this is functionality that we just released this week, so it's brand new. Definitely looking forward to folks trying this out - let us know what you think, give us some feedback. I encourage you to read the blog, where we go into a lot more detail about what the feature is and how it works. Ok, gonna hand it over to Michael to talk about event-driven applications.

AppSync, again, works through the use of queries and mutations - those are going to be your HTTP protocols. There are also subscriptions, which are going to be your pure WebSocket protocols, your wss.

There isn't really too much to say when it comes to event-driven applications - I'm much more of a visual learner myself. Now bear with me, because I am doing the dance with the multiple screens, but I do want to go ahead and showcase what happens when we start to integrate event-driven applications.

So if the last dance was fun, I'm gonna bring up two screens this time and see where we can do that. Ah, that actually worked great. So I have that one, and then I have one more - let me get my little fella in there. Ok, so I have a chat application.

Now again, I'm a visual learner. I love building out these applications to really showcase what's happening here. But this is a chat application with a twist. If you have the QR code, you can absolutely build this and try it out yourself. But I have two separate users here.

In fact, if I come over here, I have this user - myself, @focusotter - and this user can send a message to this person. And then over here you have another user, and they can send a message to this Gmail account - same person, except in this case, they don't speak the same language.

On the right-hand side, what we're going to have is this person typing in English, and the other person is then going to receive that message in their preferred language. So imagine any popular chat application - it's global, it's distributed, your team is all over the world - except you don't all necessarily have to speak the same language.

Let's test this out. To give you a showcase of what's happening here: we have AppSync going to Amazon Translate, Amazon Translate is then going to DynamoDB, and when that change happens, a subscription is invoked on two separate clients here. This person is Spanish-speaking, I believe. Let me set up a conversation between these two folks.

So we're gonna call this - how about re:Invent? Seems fine. Great, it just happened here, and this person doesn't quite see it yet, so I'm gonna refresh, and they can pull that data from DynamoDB. And now they have this re:Invent channel that they can listen in on. Great.

So there's re:Invent, and there's re:Invent on this side. Oh my goodness, this is gonna get fun with my password manager. And we're going to start talking to each other: I'm going to select this user, this person can select this one, and now we can start messaging. When I say hola on this side, presumably we get hello - and look how fast that is, right? We have a subscription being fired off; this is all real-time data happening. I swear, as a front end developer, real-time data is my favorite thing to work with.

Then on this side - a little bit tricky - I guess saying, you know, "hello, how are you?", question mark. And as soon as I click send, watch how fast this is: boom, and then boom, translated in real time, right? A chat application. Thank you, thank you, thank you.

And you can use this - this is applicable today. We have customers taking advantage of this currently, and it's just amazing to me that this is something that appeals to the masses. I chose English and Spanish, but Amazon Translate supports 22 different languages, so you can imagine the implications there, right? Anybody can go ahead and chat with this - I guess assuming they're part of those 22 languages. But let me go ahead and switch things over. We have one more demo - I'm a demo person myself. So, Michael, how did you do that?

This works by sending the request to AppSync, which then pushes out the subscription to all of the connected clients. So you noticed that these two users, when they sign up - I guess I should make mention of how that's happening; it's pretty cool. Thank you, thank you.

So when a user signs up here, this is through Cognito, and it really showcases the integration between Cognito and AppSync. When a user signs up - this is using the Amplify JavaScript libraries - they provide their information here, but you'll notice there's this bottom portion that asks, hey, what is your preferred language, English or Spanish? And again, that's more laziness on my part; I could have included more languages here. Once they do that, a Lambda function gets fired off, and that Lambda function populates a user DynamoDB table that AppSync can then consume from. Once they do that, really it's just a matter of taking that preferred language and sending it on the request, right? Amazon Translate is going to intercept that, as well as the text content, and provide a real-time notification, storing the contents in DynamoDB as well. And as much as I just blabbered on about what the process is, you saw how fast it actually is when you do it in real life. Ok?

So we had the expense tracker that used Amazon Textract, and then we just showcased Amazon Translate. So there's kind of a theme here, right? Using generative AI - or using AI in general - with your applications is cool. That's the theme. But when we start to showcase new services, things like Amazon Bedrock, you can again start to build really cool applications.

Now, I'm not gonna go into the full Bedrock and AppSync discussion here, because I am also speaking tomorrow - FWM311. If you want the full spiel on AppSync and generative AI, I have more demos to show off in that session. But let's go ahead and get the basics out of the way. Using AppSync, you can connect directly to Bedrock - again, there's no Lambda function involved here. AppSync can call internal AWS services, and as you've seen a couple of times now, it's really fast.

Now, I don't want to say that there's never a use case for Lambda, because that would dismiss most of my session tomorrow, where we talk about asynchronously invoking Lambda functions. But for a lot of use cases - most of them - it's ok to work within the constraints of needing that request to come back within a certain amount of time. Now, that's a synchronous request, right? AppSync going to Lambda - or, I'm sorry, AppSync going to Amazon Bedrock. And then, as I alluded to, there's also the ability to do AppSync to a Lambda function, where that Lambda function then talks to Bedrock.

Now, the key part here is that when you receive that response from Amazon Bedrock, you want to trigger the mutation, which then invokes a subscription, and now your clients get real-time data. Think of the scenario you saw in that chat application: it happened really fast, right? But we've all used generative AI by this point, at least most of us, I hope, and you know that sometimes it can take a while.

Now, oftentimes the solution is that you have to store that information in a database. Basically, fire this off asynchronously, store the result in a database, and then poll the database: hey, is it done? Hey, is it done? Hey, is it done? With AppSync, you don't have to do that, because you can get a real-time subscription whenever it's done.

The key part is that when AppSync invokes a Lambda function, you want to do so asynchronously. Lambda functions can be invoked in two ways. One says, hey, I'm going to fire you off; give me the response within a certain amount of time. The other is fire and forget: I'm going to invoke this Lambda function, and I don't need to wait on the result. In our case, it can return a response immediately, do that backend processing, and then respond whenever it's done doing its thing.
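To make the two invocation modes concrete, here is a minimal sketch of the request object an APPSYNC_JS Lambda resolver could return. The `Event` versus `RequestResponse` distinction mirrors the Lambda invocation types; the field names follow the AppSync Lambda data source request shape as I understand it, so treat this as a sketch.

```javascript
// Build an AppSync Lambda data source request. invocationType 'Event'
// is fire-and-forget (asynchronous); 'RequestResponse' waits for the result.
function buildLambdaInvoke(payload, fireAndForget) {
  return {
    operation: 'Invoke',
    invocationType: fireAndForget ? 'Event' : 'RequestResponse',
    payload,
  };
}

// Fire-and-forget: AppSync returns immediately, and the Lambda function later
// calls a mutation, which triggers the subscription that updates the clients.
const asyncReq = buildLambdaInvoke({ storyId: 'abc123', theme: 'bedtime' }, true);
```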

Think of a bedtime story. That's going to be my demo tomorrow, where we have AppSync calling a Lambda function. But now we don't just want text back. We want an image for the bedtime story, we want audio, because my kids love audio, and we also want the text for the story as well. That's going to take a while. I can't work within the constraints of AppSync or any other API provider, because the web browser would often restrain us from doing so. I have to have an asynchronous invocation.

Here is a tried-and-true architecture for how you would do that. Now, we can get a little more involved here. Think of scenarios where you do have that chat application streaming a response back, but you have a conversation in place. I'm talking to an agent or some kind of language model, and I want it to remember what I said previously.

Great. You've seen a couple of examples now in these demos of how we can inject DynamoDB in the middle of an operation. So it really is just a matter of saying, I'm going to send you a new message, and from there you can pull the existing conversation history, similar to a chat application. Really, I don't know who this handsome devil is, but if you want to see a video that talks about exactly how to integrate AppSync with Bedrock, there just happens to be one out there. You can click on that QR code, and it would make me feel really nice inside if you lifted up your phone and pretended to scan it, because that's me.
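Pulling the existing conversation history mid-pipeline might look like the following sketch of a DynamoDB Query request from a pipeline function. The table key name and the request shape are assumptions for illustration.

```javascript
// Build a DynamoDB Query request for a pipeline function that fetches
// prior messages in a conversation before calling the language model.
// The key name 'conversationId' is hypothetical.
function buildHistoryQuery(conversationId, limit = 20) {
  return {
    operation: 'Query',
    query: {
      expression: 'conversationId = :cid',
      expressionValues: { ':cid': { S: conversationId } },
    },
    limit,
    scanIndexForward: true, // oldest messages first, so the model sees them in order
  };
}
```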

But with that said, I do want to head off to my final demo here. This is going to be using Bedrock. Now, this is a direct integration, AppSync talking directly to Bedrock, but it is near and dear to my heart because it got me out of the doghouse with my wife once upon a time. Perfect. And then we'll go here.

Now, my wife loves word searches. That's her thing. I'm a Sudoku guy; she's a word search woman. But she does them too fast for my pocketbook to keep up. I always gotta buy her word searches. So I figured, you know what, I'm just going to build you an application where you can provide your own words.

And that was great for a while. But then she got bored because she was typing in the words herself, and it wasn't all that fun. It felt more like work. She said, OK, we can do better, right?

So I went up and, I should probably increase this just a little bit, I said you can print off your own word searches. And again, this is a really fun activity to do with your family, and you have that QR code to scan. We can give this word search a title; we can call this "re:Invent." Let's get crazy, let's do 13 by 13.

Now, if I add in word search items here, we can generate one. I can say this is going to be "friends," and then we can have "AppSync," other things in here, "Bedrock." We don't have to get too crazy with it, but I can generate this grid.

Now when I generate, we get, uh oh, "length of undefined." My demos were going so good until then. No. Let's get this working, though. Let's bring this back. What did I do? Let's give it a theme. We'll say "re:Invent" one more time. Not 134, please. And then, separated by commas, let's say "AppSync," and then we can say "Bedrock." I don't know what I did last time to screw things up, but let's try this. Yay. OK.

You get an Amazon gift card if you find "AppSync." No, I'm kidding, I don't have that power, but it's somewhere in here, right? I can verify that the words are down here, and then, for my wife's sake, I just go and print them off, and it shows the word bank and the words. It's pretty fun.

However, when I save those details, I can save those words for later in DynamoDB. And that's just fun to do. Now, any time she wants to update the words, she can add more to them, right? She can put 20 words in there, and she doesn't have to put in all 20 next time. Great. But how do we make it so that it can generate the words for us?

Well, introducing today, this new feature that just launched: it was me adding an input button. And this time we're going to get creative. Now, I have the prompt already in place. The prompt says, give me 10 random words, or 10 words themed based off of whatever the user puts in here. So this is tricky, because I don't know what the AI is going to say.
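The prompt-and-parse step behind that button could be as simple as the following sketch. The exact prompt wording used in the demo isn't shown in the talk, so this is an illustrative version.

```javascript
// Build the themed word-list prompt sent to Bedrock (illustrative wording).
function buildWordPrompt(theme, count = 10) {
  return (
    `Return exactly ${count} single words themed around "${theme}", ` +
    'comma-separated, with no other text.'
  );
}

// Parse the model's comma-separated reply into clean, uppercase grid words,
// tolerating stray whitespace and trailing commas.
function parseWords(reply) {
  return reply
    .split(',')
    .map((w) => w.trim().toUpperCase())
    .filter((w) => w.length > 0);
}
```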

But let's go with, how about "serverless"? If I can spell serverless. Oh, this is a sentence, right? So: "serverless AWS offerings with AWS AppSync and Bedrock." Let's see what that gives us back. Let's generate these words again. This is going to head off to Bedrock, presumably fire things off, and there we go, right?

AppSync, Lambda, DynamoDB, done. And without having to do anything else, I can generate this grid, and it's been updated with all of these new words right here at the bottom. I'll pause so you can try to find whatever words you're looking for. Yeah, round of applause, that was pretty good. OK. But it's a good time.

And as I mentioned, you have the code to do all of this and run it, in addition to the blog post that's coming out showing other architectures and other solutions. That way, you can take this and make your own wives, or significant others, happy as well. I'm just trying to spread the love here. But with that, I'm going to switch things back over to Bree.

Bree is going to talk about how AppSync is built for enterprise. Yeah, the one thing I want to say about the demo and the pattern that Michael showed is that we're starting to see real customers use this pattern. First of all, I am a real customer. Yeah. Well, that's why I'm impressed by the demo. I work in this space all the time, and I'm always impressed by your demos, and this was a pretty good one. But we're starting to see customers use this pattern: AppSync invoking Bedrock, or AppSync using Lambda and Bedrock for those streaming scenarios. So that's pretty amazing. It's still early days in this space, but it's really amazing to see some of those use cases and implementations already emerge. OK?

So to wrap things up: we talked about a lot of things, right? The capabilities that we went over are all really crucial for developers that want to build reliable, secure, and sturdy APIs for their applications. In the enterprise space, there are additional sets of requirements as well, and these typically have to do with security, scale, and observability.

So this year, we wanted to make sure that we could address some of the main requirements that we have been seeing from our enterprise customers. And so we released a set of features: private APIs, merged APIs, and, recently, improved metrics for subscriptions.

So if you're not familiar with the private API feature, it's fairly straightforward: it allows you to create an API that is only accessible from within your VPC. This is actually really important for a lot of customers that work in environments where everything has to happen in their VPC, or everything has to go through their VPC. I know a lot of us work in a serverless way, right? We don't want to see a VPC, we don't want to touch a VPC. But for a lot of customers, this is a real thing, and it's really important. So we released this feature this year. And if you've used AWS PrivateLink, you already know how to use it: simply specify that you want your API to be private when you create it, and it will be available to use in your VPC only.

We then released merged APIs, which is our solution for a common problem in the GraphQL space: the federation problem. The merged API solution allows customers that have multiple source APIs to work in an independent fashion, but then to bring all of their source APIs together into a single merged API that they can make available to their clients.

So it's AppSync's take on solving the federation problem, the gateway problem, that exists in the GraphQL space. It enables teams to own parts of their graph and to develop their APIs independently, at their own pace. They can even use their own tools and work completely differently, but then they come together to form this merged API. They combine like Voltron, I like to say, a reference from a guy born in the eighties.

So that is what it does. And our approach is a build-time approach. Merged APIs work at build time: you create the merged API at build time, not at runtime. The difference here is that we resolve all of the conflicts once, when you create the API, and then at runtime, when we actually process requests, there's no runtime routing that has to happen. We can call your resolver directly and your data source directly; there are no extra hops that need to be taken. So it makes for a much simpler configuration, which is simpler to understand and simpler to manage.

It also allows us to have full support for subscriptions right out of the box, which is something that is not always possible with a runtime approach. We're going to have a session, I think tomorrow, that goes really deep into merged APIs, so I definitely encourage you to check that out. I'll have some information about that in the slide deck coming up.

But merged APIs essentially allow two different teams, whether in the same account or in separate accounts, to build their own APIs and provide their own schemas, and then come together and combine those schemas. You can combine types, you can combine operations, and you can use directives to specify which type in which API has priority over other types. So there are a lot of things we do to work with conflicts and resolve them at build time. You can use an automated process or an approval process to merge your APIs. And like I said, there's no dynamic routing; everything happens directly, in one hop, with a merged API.

I do want to mention that while we are looking at our merged API solution to address this, we are also working with the open source community. We are working on the GraphQL Fusion spec that was started by the ChilliCream team and The Guild. They are working on an open source spec to address the federation problem, which I think is a really good initiative, because there have been a lot of different companies and entities working in their own corners trying to make this happen. Last but not least, we've got just a couple of seconds left.

We did introduce some new metrics for real-time subscriptions, maybe two weeks ago, so you now have more visibility into your subscription API. This is a precursor to something we are going to do next year: we are going to increase the default quota for the request rate for your tokens.

So today, if you use AppSync, the default request rate per second for your tokens is 2,000. Next year, that request rate will increase to 10,000, so you will be able to use AppSync at an even larger scale without requesting a quota increase. And we are also going to release some additional quotas, just to give you more visibility into your subscription engine. The vast majority of our customers will not see an impact; most of our customers will fall underneath those quotas. But for customers that really want to scale their APIs up to a large scale, having these new metrics and these new quotas will give them better control over their APIs.

So we talked about AppSync being built for developers, databases, edge, AI, and the enterprise. We essentially want developers and customers to be able to do more, easily, with their AppSync APIs. The last thing I want to mention, as I said, is that we have a lot of additional sessions going on today and tomorrow.

So please feel free to check out these sessions if you want to dive into some of these topics: if you really want to go deep into merged APIs, private APIs, or what Michael and our teams are doing with the generative AI stuff as well. I think this is all we had.

I want to thank you for joining the session, and thank you for your interest in AppSync. If you've used AppSync, thank you very much. If you haven't used AppSync, I think now is a great opportunity to check it out. Have a good rest of your day and a good rest of re:Invent. We'll be around for questions. Thank you all. Thank you.
