Getting started building serverless event-driven applications

Hello, everybody. Welcome to SVS 205. I hope that on day four of re:Invent, you still have a little bit of space in your brains to take in something new because I know it's been a really packed and awesome re:Invent this year. So I'm super excited that you chose to join us today.

My name is Emily Shea. I lead our Application Integration Go-To-Market team here at AWS. And I'm gonna be taking us through my talk on getting started building event driven applications. I'm also super excited this year to be joined by one of my colleagues, Narin Gaa, who's going to close us out with an awesome live demo of Application Composer and some of the really cool features that are available in that.

So I'm super excited to jump into it. Maybe just a little bit of preface - I love giving this talk because it is very much drawn on my own personal experience. So you're gonna hear a lot about kind of the trial and error that I faced as I was learning about services. And hopefully I can save you from a little bit of that and give you some of the best practices to get you off to a great start.

This is a really good talk for somebody that is brand new to building applications, starting to get into developing and into the cloud, and wants to learn about serverless and really AWS-native ways to build applications. It's also a really good talk if you are someone who's already building applications, but maybe on servers or containers or a different type of model, and you want to get started with serverless, because we're going to be talking about some of the parallels there.

So with all of those kinds of personas or backgrounds that you're bringing to this talk, there'll be something in here for you today.

I have absolutely no doubt that in this room there are so many great application ideas, so many ideas for new features that you want to build, maybe at your work and kind of add on to the application that you're developing or something that you want to build for yourself personally. And the one thing that's really constraining us is just the time and the resources and the energy to get those applications out the door and into production and in use.

And the really cool thing that I want to show you today is how services and event driven architectures can combine to help you take those ideas that you have and turn them into applications even faster. Just get those ideas out into the world.

Give you a little bit of background on me - so I mentioned my name is Emily Shea. My background is actually not in technology at all. So I started and spent a lot of my educational career in Chinese language. Still have a really big passion for learning and speaking the language.

And I got started at Amazon in about 2016. I was super interested in AWS and the cloud and getting to know the technology. And so I started working on certifications, did a little bit of coding bootcamps online, and just started to build serverless applications. I really made a point to build something that was useful to me, and I feel like that helped my knowledge stick really well and gave me something cool to share on stage with you today.

I've been in a bunch of different roles at AWS, all around serverless and application integration. Currently I lead a team of specialists working on application integration services, which includes a lot of the ones we're going to be talking about today.

So in this talk, I'm going to draw on my own personal experience and I'm going to take you through choosing AWS services, through the tools that I use to build and deploy and test those serverless applications and also critically about how to evolve your application and increase complexity and more services over time.

And lastly, we're going to close out with that live demo of Application Composer. And finally, of course, the big point that I'd love you to take away from today is the fact that serverless and event driven can help you build your applications even faster with the power of the cloud.

So I'd mentioned that I was just getting started, just getting into coding and building serverless applications and I wanted to build something that would be useful to me so that I could really kind of help those ideas sink in.

The idea that I had was based on my background in Chinese language and the fact that I was finding it really hard to dedicate time in my day to practicing Chinese. And so it'd be really cool if I was able to send myself daily reminders with a Chinese vocabulary word - just give myself that prompt to remember to study.

The pieces that I would need for this application would be some type of daily schedule to invoke it, some form of logic to pull from my database of vocab words and select the word I wanted to send out that day, and finally some kind of messaging system to send that message on to me, the user.

So that was the application that I was building. There's loads of different things that you could get started with if you just want to kind of jump in and start playing around with ideas - a couple cool ones is maybe an app that checks the weather and recommends what to wear or a workflow that allows you to take a file or an image and process that with some ML and a service.

Also a cool one that you can actually check out in the Expo Hall today is a live streaming or on-demand video application. The AWS Developer Advocates built Serverless Video as a demo that you can build and work with.

But what I want to emphasize is the fact that the services that we're going to be talking about today are not just in use for kind of like personal applications or small things that you want to build. These are in use by some really massive AWS customers for very large scale business critical production applications.

Some cool ones are PostNL, which is one of the largest logistics companies serving the Netherlands, processing over a million parcels per day. Nationwide Children's Hospital uses these services to automate workflows for petabytes of pediatric cancer data. And lastly, Taco Bell has a really cool use case around taking the order events from their delivery partner applications and sending those over to the point of sale systems.

So some really major applications that are using the services that we're going to walk through today.

So I'm going to start with just introducing our toolbox of services that we have for serverless. But before we jump into that, I also want to just take a minute to make sure that we're all clear on what is serverless and why serverless.

So serverless is really a concept that we built because we found that customers were spending so much time on the undifferentiated work of managing servers. With traditional applications, if you wanted to build a new idea and take a new application to production, you would need to provision, maintain, and scale the servers and infrastructure that your application runs on.

And what we did at AWS is we said, hey, I think we can automate that in the cloud and allow you to just upload your code and just build the business logic that is most critical and most unique to your application.

So with serverless and all the services we're talking about, you get no server management, those services scale up and down automatically based on the usage of your application. Pay for use billing model - so if there's no traffic hitting your application at the time you're not paying for it. And also built in features and functionality that are constantly adapting and constantly kind of getting better around you as we launch new cool things.

Ultimately, if you're only coding the things that make your application very unique, then you're able to get that out there and to get that into production even faster.

So I'm gonna use the traditional three-tier application that you might be familiar with, that you might have built in the past, just to show where the AWS serverless services fit into this architecture.

So starting at the bottom with our data layer, Amazon S3 (Simple Storage Service) is a great place for object storage - files, images, those kinds of things are great to put in S3. For databases, Amazon DynamoDB is a database that is really optimized for serverless architectures. And I'm going to talk a little bit more about DynamoDB and what makes it cool in a bit.

Moving on up to the application logic layer - so here we've got AWS Lambda, which you might be familiar with - serverless functions. So you can take a piece of code, put it into Lambda, and every time that Lambda function is invoked, that code will be invoked and your Lambda function will scale up and down as needed for the application traffic.

Next we get into integration services - I mentioned this is what I do Go-To-Market for in my current role. These are a bunch of really cool services that provide very purpose-built integration features within your serverless application.

So think about things like Step Functions for building workflows and orchestrating a number of AWS services into a particular order. Simple Queue Service for building message queues. EventBridge for event filtering and routing. And Simple Notification Service for sending messages.

All of these pieces, maybe you could code a bit of this on your own or you could choose a purpose built service that will have all these features of error handling and retries and all these things built into it right out of the box.

Lastly, as we get up into the front-end presentation layer, we're going to need an API Gateway to serve API endpoints so that users in the UI can interact with the back end of our application. We also might need authentication - Amazon Cognito is a great service if you want to create some protected endpoints or require some auth before interacting with those back-end resources.

And Amplify - Amplify is a really easy way to serve static content, so the HTML, CSS, and JavaScript files that make up your front-end UI, and to serve those in a very distributed, global way.

So this is the entire kind of stack that we have within serverless. I should say not quite entire because you can really pull from all kinds of AWS services in AI/ML and lots of different things that you can pull in here. But this is what I would consider kind of a core set that you can really do a lot of powerful application building with amongst the serverless services.

So coming back to that very basic kind of conceptual architecture that I had for my application that I wanted to build, let's go ahead and take this and apply those AWS services to this application.

So here at its very most basic, the application that I was building - I've got Amazon EventBridge which is sending a scheduled daily event to a Lambda function. That Lambda function is then running a bit of logic to pull from a list, it's just a CSV file at this point of all of my vocabulary words, it chooses a random one and it sends that off with SNS over to a text message and I get that on my phone every day.

So this was the most basic and probably the first application that I ever built with serverless and it was really kind of simple and easy to reason about and a good way to get started.

So within that, this plus maybe a little bit more is pretty much all the code that I needed to run this application at this point. This is just a look into one of those Lambda functions - I'm importing Boto3, which is the AWS SDK for Python. I'm instantiating an S3 and an SNS client so that I can make calls to those services' APIs and interact with them.

Then I've got my Lambda handler and the Lambda handler is the piece of the code that's going to be invoked every time that this function is invoked and it responds to the event. So in this case, it would be that daily scheduled event.

I've got a little comment where I would make that call out to S3 to retrieve that CSV file. I assemble that word that I've chosen into a message that gets sent out. And then I've got a little bit of error handling wrapped around this call to SNS to publish that message.

So a really simple bit of code and this is all that you really needed to power the application at this point.
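The slides show the actual code; as a rough reconstruction of what that handler could look like, here's a minimal sketch. The bucket, key, and topic names are hypothetical and passed via environment variables, and the `s3`/`sns` parameters are an added convenience so fake clients can be injected in tests:

```python
import csv
import io
import os
import random


def lambda_handler(event, context, s3=None, sns=None):
    """Runs once a day: pick a random vocab word and publish it via SNS.

    The s3/sns parameters allow fake clients to be injected in tests;
    inside Lambda they default to real boto3 clients.
    """
    if s3 is None or sns is None:
        import boto3  # AWS SDK for Python; only imported in the real Lambda
        s3 = s3 or boto3.client("s3")
        sns = sns or boto3.client("sns")

    # Pull the full vocab list (a CSV file) from S3.
    obj = s3.get_object(Bucket=os.environ["VOCAB_BUCKET"],
                        Key=os.environ.get("VOCAB_KEY", "vocab.csv"))
    rows = list(csv.reader(io.StringIO(obj["Body"].read().decode("utf-8"))))

    # Choose a random word and assemble the message.
    word, pinyin, meaning = random.choice(rows)
    message = f"Today's word: {word} ({pinyin}) - {meaning}"

    # A little error handling wrapped around the publish call.
    try:
        sns.publish(TopicArn=os.environ["TOPIC_ARN"], Message=message)
    except Exception as err:
        print(f"Failed to publish daily word: {err}")
        raise

    return {"statusCode": 200, "body": message}
```

The dependency-injection style (passing clients as parameters) isn't required by Lambda, but it makes the unit-testing approach discussed later in the talk much easier.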

So that was the application at its most basic. Now, before I get into kind of evolving and adding complexity to this application, I think this is a really good point to start to introduce some of those tools and really the developer toolchain that comes into play around serverless applications because this will really make your experience of building serverless applications that much more enjoyable if you have a really strong toolset around you.

So I've got some examples of some of the ones that I found particularly useful.

First off, it absolutely makes sense to just jump into the console and start playing around with stuff. This is a great place to just learn about what's available, learn about what the different features the services have and how they interact with one another. So the console is a nice place to dip your toe in.

But I definitely recommend that you evolve beyond deploying directly in the console and start to use an infrastructure as code framework pretty quickly as you're developing serverless applications.

What's really cool about an infrastructure as code framework is that it allows you to configure all of your resources - and you're probably going to have a fair number of services involved in a serverless application - within a single template. This lets you know that you're deploying the application configured exactly how you want it, every time you deploy.

There are a couple of different options for infrastructure as code with AWS - one of them, and the one I'm going to be referring to, is the AWS Serverless Application Model, or SAM.

Another one that's really popular with folks is the Cloud Development Kit, or CDK. There are also some great partner tools out there, like the Serverless Framework and Terraform - all of these are options to help you define that entire application as code. So we can take a quick look at an example SAM template.

So this is the template I would have used for that very basic version of the application I showed you. We've got some metadata about the template, and then we get into defining our resources. The main resource I'm showcasing here is that Lambda function, the send-text function. We define the type - this is a Lambda function - and then we have a couple of properties: we point to where the Lambda handler is located in the code, and the runtime, which is Python in our case. The next part, about policies, is actually something really cool that SAM does for you.

Basically, SAM is built on top of CloudFormation, but CloudFormation can get very verbose when it comes to defining all the different pieces within a serverless application. What SAM does is abstract some of those verbose pieces into really simple one- or two-liners that you can use in your SAM template. In this case, I want to define an IAM policy that allows Lambda to talk to SNS and publish messages. So I just use a SAM policy template - that's just the one line with the SNS publish message policy - and I point it at the topic name of the SNS topic I want to call.

So it's a really brief way to describe what you're trying to build, passing in some environment variables. And the last bit is the part at the end that says Events - this is all the infrastructure as code I would need to define that EventBridge scheduled event we needed to trigger the Lambda function. It creates an EventBridge scheduled event that runs on a particular schedule, and I just need maybe a couple more lines to define the S3 bucket and the SNS topic below. But it's a pretty brief definition of the application that we built.
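Since the slide itself isn't reproduced here, this is a hedged sketch of what a SAM template along those lines might look like. Resource names, the handler path, and the schedule expression are illustrative, not the exact template from the talk:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Daily Chinese vocab word app (illustrative sketch)

Resources:
  SendTextFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler        # file.function where the handler lives
      Runtime: python3.12
      Policies:
        # SAM policy template: one line instead of a verbose IAM policy
        - SNSPublishMessagePolicy:
            TopicName: !GetAtt DailyWordTopic.TopicName
      Environment:
        Variables:
          TOPIC_ARN: !Ref DailyWordTopic
          VOCAB_BUCKET: !Ref VocabBucket
      Events:
        DailySchedule:
          Type: Schedule                 # EventBridge scheduled rule
          Properties:
            Schedule: cron(0 16 * * ? *) # once a day

  DailyWordTopic:
    Type: AWS::SNS::Topic

  VocabBucket:
    Type: AWS::S3::Bucket
```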

So you can absolutely define your SAM templates by hand and write those out yourself, or you can use a really cool tool that we launched last re:Invent, which is AWS Application Composer. It's a visual designer - you drag and drop the services you're building into the application, then hit export and it gives you a SAM template right out of the box. This is the part that Narin is going to focus on at the end of our talk, and he's also going to take us through some of the really cool things that launched literally this week in Application Composer. So the service is moving fast and there's some cool stuff coming out there.

So we have an infrastructure as code template, all of our application is defined as code, and we've got our function code. The last thing we need to do is deploy that application, and the cool way to do this is with a CI/CD pipeline. Especially as you're building a serverless application, you're making a lot of changes to that infrastructure, so it's really helpful to have an automated deployment pipeline. That way, every time you make changes to the application, it's automatically getting synced up and deployed to the cloud, which allows you to move so much faster. You can use CodePipeline for this with GitHub, and this is the flow that would take: define your application in infrastructure as code, upload it to a repository; that upload to version control triggers a new build, which deploys it up to the cloud.
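The talk describes this flow with CodePipeline and GitHub; as one hedged sketch of the same push-to-deploy idea, here's what a minimal GitHub Actions pipeline using the SAM CLI could look like. The role ARN, region, and stack name are placeholders:

```yaml
# .github/workflows/deploy.yml (illustrative; secrets and names are placeholders)
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/setup-sam@v2
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.DEPLOY_ROLE_ARN }}
          aws-region: us-east-1
      # Build and deploy the SAM application on every push to main
      - run: sam build
      - run: sam deploy --no-confirm-changeset --stack-name vocab-app
```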

Another big topic within serverless is testing. A lot of people get started with serverless and might get a little frustrated because they're used to being able to test the entire application in their local environment, on their computer, and they ask, why can't I do that with serverless? So I have a couple of tips around the best way to test your serverless applications for the best results.

The first piece is that you're probably going to have a few Lambda functions that do have some complex code logic within them, and for those it does make sense to write a unit test and test that function code. Here's an example unit test - this one is again for the function we talked about earlier. It's using the unittest framework, it's mocking those calls to S3 and SNS so that we don't need to actually make calls out to the cloud services, and then it's running the test against that Lambda handler and checking whether the response is as expected.
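The exact test from the slide isn't reproduced here; the following is a self-contained sketch of the pattern described - mocking the S3 and SNS clients so no real cloud calls are made. The `send_daily_word` function is a simplified, hypothetical stand-in for the real handler:

```python
import unittest
from unittest import mock


# Simplified stand-in for the real handler; in a real project you would
# import your Lambda module and inject or patch its boto3 clients instead.
def send_daily_word(s3, sns, bucket, topic_arn):
    body = s3.get_object(Bucket=bucket, Key="vocab.csv")["Body"].read()
    word = body.decode("utf-8").splitlines()[0].split(",")[0]
    sns.publish(TopicArn=topic_arn, Message=f"Today's word: {word}")
    return {"statusCode": 200}


class TestSendDailyWord(unittest.TestCase):
    def test_publishes_expected_message(self):
        # Mock S3 and SNS so no real cloud calls are made.
        s3 = mock.MagicMock()
        s3.get_object.return_value = {
            "Body": mock.MagicMock(read=lambda: b"hello,greeting\n")
        }
        sns = mock.MagicMock()

        resp = send_daily_word(s3, sns, "my-bucket", "arn:aws:sns:::topic")

        self.assertEqual(resp["statusCode"], 200)
        # Verify the handler published exactly the message we expect.
        sns.publish.assert_called_once_with(
            TopicArn="arn:aws:sns:::topic",
            Message="Today's word: hello",
        )
```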

So for functions that have a lot of complex logic that you're coding yourself, this type of model makes sense. But you can see how you start to introduce a bit of complexity for yourself when you're mocking a lot of service calls, because cloud services can respond in a lot of different ways. Are you really anticipating all the types of things they might give back to you? It can be really hard to perfectly mock and simulate a cloud service.

So if it's really the service integrations that you're looking to test in this piece of your application, then it makes sense to just get that application into the cloud and run your tests in a development environment in the cloud. One of the things we've built within the SAM toolchain to help you do this even faster is SAM Accelerate.

What SAM Accelerate allows you to do is quickly sync new function code up into a development environment. This lets you move very quickly - you don't need to wait for an entire application deployment, an entire resource and infrastructure deployment; you can just sync up that function code and get your testing cycles to run that much faster.

So those are a little bit of the tips that I have around the testing and deployment and definition of applications and how you can build a really strong and effective tool chain for yourself for serverless.

So we have a basic MVP. We've got a tool chain set up that will allow us to move super fast. Now, let's get into how we might evolve this application to be a lot more complex and use more of those services in the toolbox that we pointed out earlier.

So we started from here. Now, one thing I was getting requests for - at this point the application just sends text messages to me. I had some friends that were interested and said, hey, I would love to be able to get these texts as well; I'd love to subscribe to the service. So I saw the opportunity to build out the ability for other people to log on to a website and subscribe or unsubscribe.

And so I added this additional functionality. The stuff on the right is pretty much the same - the only small change is that instead of text messaging, I've swapped it out for email, just because I'm reaching more people. But the entire left side is all brand new: using API Gateway to have API endpoints where people can reach the UI and subscribe or unsubscribe, and using DynamoDB to store all of the user data and to figure out what level of Chinese vocabulary they want to subscribe to.

And then I'm also serving that content up through Amplify, to give them a UI they can use to interact with my application. Here's a screenshot of that UI, and some of the things you can interact with and API endpoints that you could hit on there.

And at this stage of my application, one of the really cool things that I added was a database with DynamoDB. Early on, all I was doing was pulling a full list of Chinese vocab words as a CSV file from S3, and that was pretty simple for just a once-a-day pull of data. But now I've got users that are going in and updating their subscriptions, or unsubscribing, or creating new users.

The data use cases became much more complex, and it made sense to add a database. DynamoDB is particularly cool in that it's really optimized for serverless: it has a pay-as-you-go pricing model, it scales automatically, and you can use IAM authentication and the SDKs to interact with your data. So it's a little bit different from a database you might have worked with in the past.

The other thing that's really key to know as you're getting started with DynamoDB is just the importance of planning your DynamoDB access patterns around the way that DynamoDB is architected.

So this is one that I've really gotten into, and it's helped me a lot as I build out more DynamoDB data models. You might be familiar with a SQL data structure like this one: you've got your entire database on a single server, and you've got multiple tables grouped around similar information - a user table and a list table. If you want to find out which users are subscribed to which list, you might do a join of those two tables together. That's a typical SQL model that you might be working with.

Now, what DynamoDB does is very cool, and it contrasts with this model. With the SQL model, if you wanted to hold more data or serve more requests, the only way you really have to scale is to make that server even bigger. DynamoDB, on the other hand, is built around the concept of getting you to scale automatically first: it takes your data and automatically shards it across multiple servers. You never really have to think about whether you need to scale up or configure a very large database to start with. So that's a very cool thing that DynamoDB does.

But one thing to keep in mind is that you'll want to group your data into item collections, to make sure the data you reach for most often is in the same location and DynamoDB is storing it all together. So think about your access patterns. In my case, I wanted users to be able to log in, see their subscriptions, and update those - maybe add subscriptions or unsubscribe. So I want to be able to query by a user and then return all of the subscriptions they have, and that's the way I've structured the item collections within my DynamoDB table.
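To make the item-collection idea concrete, here's a small pure-Python sketch (no real DynamoDB calls) of the kind of key layout this describes: every item for a user shares the same partition key, so a single query returns the user's profile and all their subscriptions together. The key names and values are illustrative, not the actual table:

```python
# Hypothetical single-table layout: every item for a user shares the same
# partition key (pk), so one query retrieves the whole item collection.
table = [
    {"pk": "USER#emily", "sk": "PROFILE",    "username": "emily"},
    {"pk": "USER#emily", "sk": "LIST#hsk-1", "status": "subscribed"},
    {"pk": "USER#emily", "sk": "LIST#hsk-2", "status": "subscribed"},
    {"pk": "USER#sam",   "sk": "PROFILE",    "username": "sam"},
    {"pk": "USER#sam",   "sk": "LIST#hsk-1", "status": "unsubscribed"},
]


def query(pk, sk_prefix=""):
    """Simulates a DynamoDB Query: one partition key, optional sort-key prefix."""
    return [item for item in table
            if item["pk"] == pk and item["sk"].startswith(sk_prefix)]


# One query returns everything about a user: profile plus all subscriptions.
emily_items = query("USER#emily")
# Narrowing with a sort-key prefix fetches only the subscriptions.
emily_lists = query("USER#emily", sk_prefix="LIST#")
```

The design choice here is exactly the access-pattern-first thinking described above: because "show me a user and all their subscriptions" is the hot path, those items live in one collection and come back in one query.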

Now, if you want to dig into this topic and see some really cool examples, I definitely recommend checking out Alex DeBrie's The DynamoDB Book, because it's a really great primer on how to approach this and build your own data models.

So at this point, I saw an opportunity to say, hey, let's make it a little more interesting for the users of this application. They have the ability to subscribe and unsubscribe from the emails and Chinese vocab words they're getting, but why don't we give them the ability to save some data about themselves - have a user profile, be able to change their username? And really the path for this is that I want them to be able to save more interesting information about the daily word they're getting - maybe an example sentence, or a quiz result if they want to take a quiz on the daily word.

So I wanted to have more data about a user and store it in a way that's secure and only accessible by that particular user. With this, I introduced Amazon Cognito, and both of these API endpoints - for updating user data and getting user data - are protected and require auth through Cognito. And these are some of the UIs that I built; these and the previous ones I showed earlier are all built in Vue.js and served through Amplify.

So I've got user profiles and the ability for people to click in and update their profile settings or update their subscriptions in the application now. So that's the application pretty much as it stands today. But I'm always thinking about new ways to take new services or new features that we're launching and integrating them into the application.

And that's kind of my tool for learning about new serverless services.

So here are one or two of the ways that I've been extending this application recently.

One of which is that I had a really big feature request from a lot of people using the application: they wanted to not only read and see the words, but also hear them and get some of that pronunciation audio.

I thought about this for a long time and couldn't figure it out - do I just go hire people to speak the words and record them? That seems awfully labor intensive.

Until I realized that Amazon offers Polly, which allows you to take text and generate audio files from it.

So I had a ton of words and vocab lists that I needed to run through Polly, and I needed a way to iterate over those lists and save all of the audio files in S3 so that I could serve them from my front end.

Now, the service that I used for this is Step Functions.

Step Functions allows you to assemble AWS services into a workflow, making sure you're taking steps in a particular order and then taking the outputs of those steps and moving on with the workflow.

In this case, my Step Functions workflow starts with a Lambda function that reaches into DynamoDB and pulls the word list.

Then it starts a Map state, where it iterates over a batch of words and calls Amazon Polly to generate an audio file for each one.

Then it takes the audio file key and saves that to DynamoDB, so that my front end knows where to look for that audio file to serve it to the end users.

Then it has a Wait state, and the reason it has a Wait state is because the Polly API has a rate limit.

So this is a great way to avoid running into throttling on APIs that have limits: introduce a Wait state within your Step Functions workflow.

And then finally, it goes back and asks, are there more words to batch through? And then it starts the whole batch process over again.

So this is the workflow that I built around Step Functions and Polly to add this new feature to my application.

And it was super quick to build this - again, very little code to write, but some really cool functionality that you can build by assembling various services.
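The workflow described above could be expressed in Amazon States Language roughly like this - a hedged sketch, with placeholder Lambda ARNs standing in for the Polly and DynamoDB steps (the real workflow might use direct service integrations instead of Lambda tasks):

```json
{
  "Comment": "Sketch of the audio-generation workflow; ARNs and timings are placeholders",
  "StartAt": "PullWordList",
  "States": {
    "PullWordList": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:...:function:pull-word-list",
      "Next": "ProcessBatch"
    },
    "ProcessBatch": {
      "Type": "Map",
      "ItemsPath": "$.batch",
      "Iterator": {
        "StartAt": "CallPolly",
        "States": {
          "CallPolly": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:...:function:generate-audio",
            "Next": "SaveKeyToDynamoDB"
          },
          "SaveKeyToDynamoDB": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:...:function:save-audio-key",
            "End": true
          }
        }
      },
      "ResultPath": null,
      "Next": "WaitForRateLimit"
    },
    "WaitForRateLimit": {
      "Type": "Wait",
      "Seconds": 5,
      "Next": "MoreWords"
    },
    "MoreWords": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.remaining", "NumericGreaterThan": 0, "Next": "ProcessBatch" }
      ],
      "Default": "Done"
    },
    "Done": { "Type": "Succeed" }
  }
}
```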

The next thing I'm thinking about, using Amazon Bedrock and also the PartyRock playground that we built around Bedrock: right now people can look at example sentences for the daily word, but they have to click out to a link and go to an external dictionary website or something to see some random example sentences on the internet.

And I think it would be really cool if I could instead generate my own sentences and attach those to that word so that people can view them.

I wanted to see if Bedrock would be a good solution for this - to be able to ask Bedrock, hey, I've got this Chinese word, can you generate a handful of example sentences and return them to me? Before I went in and actually wrote a bunch of code and started calling the Bedrock API, I just went into PartyRock, which allows you to test some ideas against Bedrock, and it generated this application for me.

So far I've asked some native speakers, and they said there are some sentences that sound really good, and some that maybe need a little bit of iteration and prompt engineering to get the right responses back.

But it seems like this is going to be a really viable way for me to add example sentences, and it was cool to be able to just test this idea out in PartyRock. Now I can go build that - maybe build a Step Functions workflow that calls out to Bedrock - and add it into my application.

One thing I often get questions about as I'm taking people through this application is: what is the monthly cost associated with this app?

So I've got a couple of stats about the traffic for this application. Again, this is just for my own personal learning, so it's not enormously massive.

But there's a decent number of people on there - folks that have attended past sessions or wanted to subscribe and learn Chinese.

So this is roughly what this application sees over time, and the cost is shockingly low.

It is less than a dollar, because I'm still below the free tier for a lot of these services. But I also think this is a particularly great use case for serverless, where the busiest time for my application is just that one time a day when I'm sending out all of those messages and sending people their word of the day.

The rest of the time, that application does not need to be running, does not need to be holding unused capacity.

So I think this is a particularly well-optimized use case for serverless. I wanted to end my portion by sharing a couple of the best practices and tips that have helped me along the way as I've been learning serverless. The first ones are all around developer tooling.

so building with a visual designer, like application composer, also step functions has workflow studio that's integrated with application composer. so you can build your workflows visually. that's a really nice way to get started and then also really cool is that those export infrastructures code template, so you can get running with that and deploy that quickly.

I'd also highly recommend that you get started early with an automated deployment pipeline. This will make building your application and evolving it over time really smooth, and I really enjoyed working with this.

Lastly, there are definitely occasions where you'd want to test some complex function code, but I definitely recommend testing in the cloud if you're trying to understand how services interact with one another. And then my other tips are around application design.

So as you can see with this example, I've really started small with a simple MVP, and taken that and evolved it in complexity over time and added new services.

And that's one of the really cool things about serverless: it is super extensible. If you've got events that you're emitting that you want to build a new feature on top of, or you've got a workflow that you want to add a new step into, the extensibility to evolve to meet new and changing requirements is one of the biggest things that, when I talk to customers in my day job, keeps them coming back to serverless.

And also, as you think about DynamoDB, planning your database access patterns and organizing your DynamoDB tables around item collections is going to help you have a really nice data model that will sustain your application over time.
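To make the item-collection idea concrete, here is a small sketch of a key-composition helper. The `PK`/`SK` attribute names and `USER#`/`WORD#` prefixes are hypothetical, but the pattern - one partition key per collection, typed sort-key prefixes - is the standard single-table approach.

```python
def user_item_key(user_id: str, item_type: str, item_id: str) -> dict:
    """Compose DynamoDB keys so all of a user's items form one item collection.

    Hypothetical key schema: partition key 'PK' groups items by user, and the
    sort key 'SK' is prefixed by item type so one Query can fetch a whole
    sub-collection (e.g. every word a user has received).
    """
    return {"PK": f"USER#{user_id}", "SK": f"{item_type.upper()}#{item_id}"}

# A Query with KeyConditionExpression PK = 'USER#emily' AND begins_with(SK, 'WORD#')
# would return every word item in that user's collection in one request.
```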

So the QR code will bring that up again towards the end. But with that, I am happy to pass over to Narin to take us through the demo. Thanks.

So, as with anything at AWS, let's start with you, right? With you as in customers and customer requirements. Let's pick one of the customer requirements today and see how we can go about building it using Application Composer, serverless services, and a bit of an AI-enabled coding companion as well.

As we see on the screen, let's pick up one of the requirements wherein we have a bunch of documents. How do we go about summarizing them and saving them in a backend persistent database, and also build an API so that our users can use either a web application or a mobile application to access that API and retrieve the summary results which are stored in our database?

If we understand the core of this particular requirement, we see two flows. One is an asynchronous flow wherein users submit or upload the documents to our storage service; they need to be picked up by a processing engine or processing compute, analyzed with an AI model, and that result should be saved in a database. That is our asynchronous flow.

Whereas for the synchronous flow, we want the responses when users access that API to be as quick as possible: it goes to the backend database, picks the summary of those documents, and serves it to the end users, right?

So these are the two key requirements for our document summary app. Let's see how we can go about building that.

Okay, this is my AWS console. I'm going to log into Application Composer. This is the Application Composer service page.

We can either open a demo project or, as in our case, create a new project for our document summary app.

Now, on this Application Composer page, on the left you see what we call resource cards, and they are of two types.

One is the enhanced component. Enhanced components can be a combination of one or multiple CloudFormation resources. For example, take a Lambda function: if I drag and drop it into my visual canvas here on the right side, it will of course launch the Lambda function, and along with that, it can also contain what the function requires in terms of the IAM role, the log group where it emits events, etc.

The other type of resource card is a standard IaC resource; these are the individual components, and as you can see, there are almost 1,149 of them as of today. You can make use of a combination of both.

And on the right side, we have the canvas where we can drag and drop these resources; while doing so, it's going to generate the template as we speak.

Okay, let's start with our storage service. For that, I'm going to search for S3 bucket and drag and drop the S3 bucket into my canvas.

If I click on details, we can give it a name that reflects its purpose; I'm calling it DocumentsS3.

Now, if you notice here, by default Application Composer is following best practices by blocking public access to this storage bucket. And also, if I go into the template side of things, it is enforcing storage encryption and, at the same time, denying traffic which is unencrypted.

So it's basically enforcing encryption both at rest and in transit. Okay?
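To make those defaults concrete, this is roughly what the generated template section could look like; the exact YAML Application Composer emits and the resource names here are illustrative.

```yaml
# Illustrative CloudFormation for the defaults described above:
# public access blocked, SSE at rest, and a policy denying non-TLS requests.
DocumentsBucket:
  Type: AWS::S3::Bucket
  Properties:
    PublicAccessBlockConfiguration:
      BlockPublicAcls: true
      BlockPublicPolicy: true
      IgnorePublicAcls: true
      RestrictPublicBuckets: true
    BucketEncryption:
      ServerSideEncryptionConfiguration:
        - ServerSideEncryptionByDefault:
            SSEAlgorithm: aws:kms
DocumentsBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: !Ref DocumentsBucket
    PolicyDocument:
      Statement:
        - Effect: Deny            # reject any request made over plain HTTP
          Principal: "*"
          Action: "s3:*"
          Resource: !Sub "${DocumentsBucket.Arn}/*"
          Condition:
            Bool:
              aws:SecureTransport: "false"
```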

We want the summary results to be processed, and for that, let's leverage a Lambda function. Let's call it ProcessFiles. Okay?

Now, you see that for each resource which we are dragging and dropping, there are resource properties. Although this is a good set of properties, it may not be the fully exhaustive list of all the supported properties, but you can get started with what you have here, and whatever is not available, you can go directly to the template and add there as well. Okay?

I've given it the name ProcessFiles. You can define the package type, whether it is a zip or a Docker image; I'm selecting zip here, and I'm selecting the source path as well. We can select the runtime; I'm going to select Python. You can select the architecture, and so on and so forth - memory, CPU, provisioned or reserved concurrency, etc. I'm clicking save.

Now, as you can see, for each resource card there are ports: one on the left and one on the right.

The one on the left indicates which resources can invoke this particular resource, in our case the Lambda function. And the one on the right indicates what this resource can further invoke or pass that event on to,

thus creating an event-driven architecture.

Now, as soon as I click on this particular port, you see that the left-side port on the Lambda function is already highlighted, which means that if I make a connection, it is a supported method.

So with little guesswork, you would be able to create architectures in Application Composer. Okay?

As soon as I connected it, it did something for us. What it did is that it has already added code in our SAM template saying that it is going to trigger that Lambda function on every object create or removal, right? Which is exactly what we need.

We need this Lambda to be triggered when somebody uploads a document into our storage. Okay?
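In SAM terms, the trigger that the connection adds looks something like this; the resource names are assumed to match the demo, and the exact generated YAML may differ.

```yaml
# Illustrative SAM event source wired up by the connection:
# invoke the function on object create/remove in the bucket.
ProcessFiles:
  Type: AWS::Serverless::Function
  Properties:
    # ... runtime, handler, etc. as configured above
    Events:
      DocumentsBucketEvent:
        Type: S3
        Properties:
          Bucket: !Ref DocumentsBucket
          Events:
            - s3:ObjectCreated:*
            - s3:ObjectRemoved:*
```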

Let's go ahead and save the results in our database, for which I'm going to leverage an Amazon DynamoDB table. I'm calling it SummaryTable. Again, you can go with the defaults or set your own configuration like a sort key or expiration (TTL) attribute, etc.

Let's go ahead and connect it as well. Once I connect it, again, the same thing: it's basically adding the name of the table as an environment variable, so that when I write or build my business logic in the Lambda function, I can refer to this table name as a variable, right?

And at the same time, it is adding an IAM policy to the Lambda function. In this case, it is creating a create/read/update/delete (CRUD) policy, which is the correct policy so that my Lambda function has enough permissions to read and write to the backend database. Okay?
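A hedged sketch of what that connection could add to the template: the table name as an environment variable, plus SAM's built-in CRUD policy template scoped to that table. Resource and variable names are illustrative.

```yaml
# Illustrative SAM snippet for the Lambda-to-DynamoDB connection.
ProcessFiles:
  Type: AWS::Serverless::Function
  Properties:
    Environment:
      Variables:
        SUMMARY_TABLE_NAME: !Ref SummaryTable   # referenced from function code
    Policies:
      - DynamoDBCrudPolicy:                     # SAM policy template: CRUD on one table
          TableName: !Ref SummaryTable
```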

As we talked about, our application also leverages Amazon Bedrock to do the document summarization.

So let's go ahead and give it additional permissions to invoke a model. For that, I'm going to leverage a standard IaC resource. If I drag and drop it here, unlike our enhanced component, the standard IaC resources need to be given their configuration as YAML.

So I'm going to call it Bedrock Access and give it a permission as a policy document. Perfect. As you can see the template is, you know, it's generating the template as we are dragging and dropping the resources.

Okay, let's refer to our Bedrock Access here in the policies. I'm giving it two permissions - one is to read from the S3 bucket so that the Lambda function can access the files. Okay? So this is our asynchronous flow here, where the users, when they upload an object into the S3 bucket, will trigger the Lambda function.
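The two permissions just described might be expressed like this on the function; `S3ReadPolicy` is a real SAM policy template, while the Bedrock statement and its broad resource scope are assumptions for illustration.

```yaml
# Illustrative function policies: read objects from the bucket,
# plus an inline statement allowing Bedrock model invocation.
Policies:
  - S3ReadPolicy:
      BucketName: !Ref DocumentsBucket
  - Statement:
      - Sid: BedrockAccess
        Effect: Allow
        Action: bedrock:InvokeModel
        Resource: arn:aws:bedrock:*::foundation-model/*
```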

The Lambda function would process the documents and then save the summary back in our DynamoDB table, right? Now, as this is on the console, we have two options: one is saving the template locally, the other is activating something called Local Sync.

So what this Local Sync does is that whatever we're creating in the AWS console, it's going to save it in our local file system, from where we can either check it into our version control system, where your pipelines take over, or use AWS SAM commands to deploy it.

So let's see that in action now. I'm going to save this, and okay, I'm going to create a new folder, document-summary. Perfect. It's asking me for temporary session permissions so that it can read and write files to my local file system.

Perfect. So auto save is enabled. My configuration is saved to my local file system. So what I'm doing is I'm going to open up my favorite IDE - VS Code. Let me open up the project folder and see what it has created. Okay, perfect. So this is the template which it has created with all the resources which we see, right?

So the document S3 bucket, the bucket policy, the process Lambda log group, summary table, and Bedrock Access. Now as announced this morning, we can actually leverage or use Application Composer right from VS Code, right?

So if I open up Application Composer within my IDE, I can continue my application creation right from the IDE. Okay?

So we only did half of the application for now. Let's do the other half, which is the synchronous flow, or the API. For that, I'm going to drag and drop the API Gateway into my canvas. Okay, let's call it Summary API. You can add authorizers or methods like GET by ID, etc., which we don't need here.

I'm going to use the defaults for now. You can configure CORS if you need that, or use an external API definition file as well. Okay, perfect. I'm saving it, similarly to our ProcessFiles Lambda function.

Let me also do a Lambda function for our GetSummary. Let me call it GetSummaryLambda. Perfect, I'm saving it. Connect both of these: I want this GetSummary to fetch from our DynamoDB table. Okay? That's our architecture done. Design is done, configuration is done, template is ready to go.

So let's go ahead and deploy this template, for which I'm going to open up a new terminal. Let me zoom in a little bit. Yeah, I'm using a SAM CLI command called `sam sync` to quickly validate this template which we have built, build the artifacts, and go ahead and deploy it.

I'm going to give it a stack name - document-summary - and specify the region where it should go - us-east-1. Okay, it's basically asking me: `sam sync` is generally recommended for a development environment, do you want to go ahead? To which I'm going to say yes.

Okay, it is validating the template, building the artifacts, and also creating the CloudFormation stack in my AWS account, all with one command. And you can also add the `--watch` flag to the command we used, where it will keep syncing all the changes we make into the AWS environment so that we can quickly test them.
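For reference, the commands used in this step look roughly like the following; the stack name and region are the ones from the demo.

```shell
# Build, validate, and deploy in one step (development environments):
sam sync --stack-name document-summary --region us-east-1

# Or keep local changes continuously synced to the AWS account:
sam sync --stack-name document-summary --region us-east-1 --watch
```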

As this would take a few minutes, what I've done is I've already deployed the very same architecture, so let's see things in action and test it, right?

So what I've done is I have this template - this is the very same architecture which we have seen just now. And in this case, I also have the Lambda function, or the business logic, created - or rather generated would be the right word, because I've leveraged Amazon CodeWhisperer to write this particular business logic.

What it is doing in our case is that this particular ProcessFiles Lambda function is leveraging the Boto3 SDK and summarizing the documents we upload in under 100 words using Bedrock - in this case, an Anthropic Claude model - and saving those in our DynamoDB table, right?
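A small sketch of how such a summarization function might be structured. The prompt wording, model ID, and request format are assumptions rather than the generated code from the demo; only the prompt-building helper is exercised offline.

```python
import json

WORD_LIMIT = 100
# Hypothetical Claude model ID on Bedrock.
MODEL_ID = "anthropic.claude-v2"

def build_summary_prompt(document_text: str, word_limit: int = WORD_LIMIT) -> str:
    """Build the summarization prompt; the wording is illustrative."""
    return (
        f"Summarize the following document in under {word_limit} words:\n\n"
        f"{document_text}"
    )

def summarize(document_text: str) -> str:
    """Sketch of the Bedrock call (needs AWS credentials and model access)."""
    import boto3  # deferred so the pure helper above stays testable offline
    client = boto3.client("bedrock-runtime")
    body = json.dumps({
        "prompt": f"\n\nHuman: {build_summary_prompt(document_text)}\n\nAssistant:",
        "max_tokens_to_sample": 300,
    })
    response = client.invoke_model(modelId=MODEL_ID, body=body)
    return json.loads(response["body"].read())["completion"]
```

The processing Lambda would then put the returned summary into the DynamoDB table keyed by the document name.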

And I have a bunch of files here on the left side. If I quickly check, these are product documentation, such as for DynamoDB and Application Composer itself, right? So that's almost 7,800 lines of documents we have.

We want them summarized by our application in under 100 words. So let's go ahead and upload these documents and see what exactly happens. Okay? I'm using the AWS S3 CLI command to copy these to our storage, the S3 bucket. Okay, perfect. The document upload is complete.

Let's go to the AWS console and see if they are being processed. Okay? I'm using the pre-deployed version which is in Oregon, so I'm going to that region. This is the DynamoDB table, and if I check the Explore table items option - oh, by the time I came here, they're already processed, that quickly. Now you see - okay, let's pick one of the latest processed documents.

For example, it picked up the Route 53 product documentation and summarized in under 100 words what Route 53 does: "Amazon Route 53 is a highly available and scalable DNS system", right?

And similarly, if you want to check the API we have built, we can either go to API Gateway, open up the API, and test it here, which should fetch us the list of all the items in our DynamoDB table, right? For example, this is one. And you can also use tools like Postman to make that request.
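To illustrate the synchronous path, here is a hedged sketch of what a GetSummary function could look like. The `SUMMARY_TABLE_NAME` environment variable and the use of a full table Scan are assumptions; only the response-shaping helper runs offline.

```python
import json

def to_api_response(items: list) -> dict:
    """Shape DynamoDB items into an API Gateway proxy response (illustrative)."""
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(items),
    }

def lambda_handler(event, context):
    """Sketch of the GetSummary function: read all summaries and return them.

    A real app with a large table would Query an item collection rather
    than Scan the whole table.
    """
    import os
    import boto3  # deferred so the pure helper above stays testable offline
    table = boto3.resource("dynamodb").Table(os.environ["SUMMARY_TABLE_NAME"])
    items = table.scan().get("Items", [])
    return to_api_response(items)
```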

For example, this is the summary for lambda.txt; there is one for the Well-Architected Framework, Application Composer, etc. Okay, coming back to our Application Composer: although we have created the architecture, deployed the SAM template, and tested it all out, we can keep adding more resources or changing the attributes right from Application Composer so that we can always update our architecture.

Okay? For example, as Emily talked about the frontend, you can also use Amplify: drag and drop it here, then go to the resources and create a frontend, connect it to the backend storage or this S3 bucket, and connect it to the API so that it becomes a full-fledged application on its own, right?

Also, similarly: we talked about documents just now, but what if you have a complex workflow you need to achieve? For example, the document needs to have any PII removed before being stored in the backend database. How do you achieve that? For that, you can leverage a Step Functions state machine right from Application Composer, which is now in VS Code, and you can visually create the full state machine within the Application Composer console and build those distributed applications, right?

So you can do that, save. Oops yeah. So you can basically replace this Lambda function with a state machine and go through a complex workflow if you have such a requirement, right?

Okay, so far we have talked about something which is brand new. What if you already have a template? Let's see how that would look. If I open up a new window - okay, what I've done is I've already picked up a sample project from AWS Samples, which is CQRS, the Command Query Responsibility Segregation pattern.

Let's see how that looks in our Application Composer. Okay, this is a massive template with lots of resources. Just by loading that template in Composer in VS Code, it shows you a visual representation of the whole architecture: how one component is integrated with another, which event is triggering which.

For example, in our case, this particular QueryItemReport Lambda function is fetching the data from this database, but it is using RDS Proxy to connect to it. So with one single template, you can see the full state of the architecture right from Application Composer.

Okay, so if I switch back to the - yeah, if you want to continue your serverless learning journey, you can use AWS Skill Builder, where you have lots of resources in terms of learning plans and video learning, and you also have ramp-up guides with which you can start your serverless learning journey, and you can earn badges in serverless and workflows, etc. You can find them at start.aws/cus-learning.

Yeah, as Emily mentioned, you can check out the Serverless Video application, which is built on a serverless architecture. It also leverages generative AI for titles, and it uses video processing, etc.

Okay, all the session resources we have talked about today - especially on Application Composer and the serverless services - you can find the full list of resources at that particular URL.

And more importantly, Application Composer is a free-to-use service. There is no additional charge for using Application Composer; you only pay for the resources which you deploy with it - in our case, the S3 bucket, the DynamoDB table, the number of Lambda invocations, etc.

With that, we want to thank you for your time and for joining us today. We hope you found it useful. Please do take a moment to do the survey in the AWS Events app. Thank you so much.
