Build production-ready serverless .NET apps with AWS Lambda

All right, good afternoon, everyone. You are all here because you are developers and architects who want to learn the best way to build serverless applications with .NET. It is as simple as that. And to start, I want you all to imagine a time in your developer career when you've had an application fail because of something completely unrelated to your application code - maybe it was a hard disk failure, maybe an entire data center overheated. And I wanna see a show of hands - who really enjoyed fixing that issue? No, nobody.

When you're building serverless applications with .NET, you start to avoid some of these problems, you start to avoid some of these issues. And over the next 55 minutes, this is exactly what you're going to learn.

Now, you're gonna learn all the skills you need to be a pro serverless .NET developer. And you're gonna do that by walking through the entire software development life cycle - from build, to test, to deploy, to operate.

I'm James Eastham. I'm a senior cloud architect here at AWS, and I'm later going to be joined by my friend and colleague Craig Bossi, but let's get straight into it. A quick show of hands just to get started - who's already using Lambda in some way with .NET? Ah, lots of hands. Awesome. Wherever you are in your serverless journey today, whether you are or you aren't, there's gonna be something in this session for you all.

And one of the important things as you start working with Lambda is that there can be quite a few ways that Lambda is different to how you might be used to building applications. So let's just start by looking through some of them. And we're gonna do that within the context of an existing application.

For the rest of this talk, you are all now developers working for a completely fictional bank. This is the wider application architecture that you have. But to start with, we're gonna focus on this little piece here - you all need to build out a web API backed by Lambda with data stored in DynamoDB. Nice and simple.

And we're gonna start walking through how Lambda actually differs from your traditional model. With the traditional ways of building applications, whether that's on a VM or in a container, it looks a little something like this:

Your application starts up once. Requests come into that environment and they're handled concurrently within that same running instance of your application. With Lambda, this differs pretty fundamentally - your application isn't started up until the point at which the first request comes into your execution environment. And then each of these environments can only process one request at any one time.

So if you receive two requests into Lambda in the exact same millisecond, or a request is already being processed when your second one comes in, your entire application needs to start up again. That initialization phase needs to happen again.

And the other interesting thing with these execution environments is that they each get their own dedicated set of resources. If you allocate a gigabyte of memory to a function, each individual request is gonna get its own dedicated gigabyte. And that of course differs from the server-based model, where all of your requests share that same pool of resources, that same pool of memory and CPU.

And that's the other way that Lambda differs - the way resource allocation works. If you have an existing application and you need to handle more requests - your bank suddenly has a load of new customers - well, that typically means you're gonna need more resources, and more resources, bigger servers, bigger containers, typically mean more cost.

And as you start working with Lambda, you realize that that isn't necessarily the case anymore. You learn that the only resource knob you have to tweak is the memory configuration. And if you are optimizing a CPU-intensive workload, that can lead you to ask the question - how do I actually change the CPU of my functions?

And Lambda allocates CPU in proportion to the amount of memory that you configure. So as you increase the memory allocation, so does the CPU and so does the network bandwidth as well. What that means really interestingly is that memory allocation, performance and cost don't actually scale linearly anymore.

The final way that Lambda differs is with the programming model itself. This is a typical example of what we would call a single purpose handler. And it's not important to read all of this code here. The important bit I want to draw your attention to is this line here - because amongst all this code, this is the only line that's actually doing anything relevant to your business logic.

The rest of this code is all boilerplate - mapping the inbound request that comes in from API Gateway, mapping that to your domain, and then building the response to go back out again.
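(The slide code isn't reproduced in this transcript, so here's a minimal sketch of the shape being described - the AccountService call and type names are illustrative, assuming an API Gateway REST event in front of the function.)

```csharp
// A hedged sketch of a single purpose handler; AccountService is hypothetical.
using System.Collections.Generic;
using System.Net;
using System.Text.Json;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;

public class Function
{
    public APIGatewayProxyResponse FunctionHandler(
        APIGatewayProxyRequest request, ILambdaContext context)
    {
        // Boilerplate: map the inbound API Gateway request to your domain.
        var accountId = request.PathParameters["accountId"];

        // The one line of actual business logic.
        var account = AccountService.GetAccount(accountId);

        // Boilerplate: build the response to go back out again.
        return new APIGatewayProxyResponse
        {
            StatusCode = (int)HttpStatusCode.OK,
            Body = JsonSerializer.Serialize(account),
            Headers = new Dictionary<string, string> { ["Content-Type"] = "application/json" }
        };
    }
}
```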

So you might be sat there thinking "James, this just seems very different. Why would I want to do this? Why would I want to build serverless applications? I've got to think about all these different things."

And actually, I want to start by telling you that it's not necessarily the case that you have to do that. And I'm gonna demonstrate that to you by walking through the build phase - how you actually build Lambda functions.

Of course, you're all .NET developers, you spend time writing a lot of .NET code. And the first thing I wanna tell you is that you can keep the same familiar, warm, fuzzy programming model you're used to when you're building with Lambda.

And this is the situation you find yourself in with the bank - you have an existing ASP.NET application and you think to yourself "Wouldn't it just be wonderful if I could just pick that up and run it on Lambda?"

And actually, you can do exactly that if you add a reference to the Amazon.Lambda.AspNetCoreServer NuGet package.

And if you're building ASP.NET applications in a more traditional way - and by that I mean not using minimal APIs - you need to add a single new class to your project. Call this class whatever you want, you can call it Banana if you wish; I would recommend using something sensible like LambdaEntryPoint.

What is important about this class is the base class that you inherit from. And that's gonna be one of three different base classes that come from the Amazon.Lambda.AspNetCoreServer package.

And these base classes all have a method called FunctionHandlerAsync. And that is what Lambda is actually going to invoke when a request comes in. And that method contains the logic to transform that request that comes into Lambda into something that ASP.NET can understand. And then it does the same again on the way back out.

And that's why the base class you use differs depending on what it is you're putting in front of Lambda - because the request payload that comes into Lambda is gonna be ever so slightly different whether that's API Gateway, or an Application Load Balancer, or whatever that might be.

The other thing these base classes allow you to do is to override an Init method. And in that Init method you can then, as you can see here, set your Startup code. And here you're using the exact same Startup class that you already have in your ASP.NET application.

Of course you could equally have some Lambda specific initialization logic in here as well.
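As a concrete illustration, a minimal LambdaEntryPoint might look like this (a sketch, assuming an API Gateway REST API in front and an existing Startup class):

```csharp
using Amazon.Lambda.AspNetCoreServer;
using Microsoft.AspNetCore.Hosting;

namespace Bank.Api
{
    // APIGatewayProxyFunction suits a REST API; APIGatewayHttpApiV2ProxyFunction
    // and ApplicationLoadBalancerFunction cover HTTP APIs and ALBs.
    public class LambdaEntryPoint : APIGatewayProxyFunction
    {
        protected override void Init(IWebHostBuilder builder)
        {
            // Reuse the exact same Startup class your ASP.NET application already has.
            builder.UseStartup<Startup>();
        }
    }
}
```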

If you're using minimal APIs - you're building APIs in the new way - this gets even simpler. In your Program.cs file, you simply add builder.Services.AddAWSLambdaHosting, specify the enum for, again, what it is you're putting in front of Lambda, and this API is now ready to run on Lambda.
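A minimal sketch of that Program.cs change, assuming the Amazon.Lambda.AspNetCoreServer.Hosting package is referenced (the endpoint is illustrative):

```csharp
var builder = WebApplication.CreateBuilder(args);

// One line makes the app Lambda-aware; the LambdaEventSource enum says what
// sits in front (HttpApi here - RestApi and ApplicationLoadBalancer also exist).
builder.Services.AddAWSLambdaHosting(LambdaEventSource.HttpApi);

var app = builder.Build();

// An illustrative endpoint for the banking example.
app.MapGet("/accounts/{accountId}", (string accountId) =>
    Results.Ok(new { AccountId = accountId }));

app.Run();
```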

And one of the really cool things about these libraries is that they're context aware. So you can now take this exact same ASP.NET application, run it locally on your machine, run it in a container, and run it in Lambda - and it's contextual. So it understands when it's running in Lambda and it will start up differently and it will run differently based on that.

And this is really cool because this allows you to move really quickly, it allows you to get started, it allows you to get some of these benefits of using Lambda, and it keeps that familiar programming model that you're used to - you're just building with ASP.NET, it's nice and simple.

It does have its tradeoffs though, and those tradeoffs specifically come around the initialization when a function starts up. So what you're seeing here is a comparison between a .NET 6 function built as a single purpose handler, and a minimal API running on Lambda.

And this is the exact same application - that QR code you can see there will take you to a GitHub repo where we have benchmarks across all the different ways you can run .NET on Lambda. And you'll see that once an execution environment is available, once it's warm, the performance is pretty comparable - .NET is a performant language, you all know that.

However, when you start to look at the startup phase, and remember this is gonna happen quite often with Lambda, you see that there's a considerable difference in the startup there.

One of the really interesting things about this though is that when we run these benchmarks, we run 100 requests a second for 10 minutes. And that typically equates to about 150,000 requests. When you're running minimal APIs on Lambda, typically that's about 90 cold starts.

By comparison, with the single purpose handler, it's typically about 300 or 400 cold starts. So although the cold starts are longer with minimal APIs, you're likely going to see fewer of them.

But actually, with your banking application, this isn't quite what you need. You need something slightly more performant. You want to do this, you want to keep this familiar programming model, but you don't want to use ASP.NET.

So what do you do now? This is where I want to introduce the Lambda Annotations framework. Who's actually used the Lambda Annotations framework? Who's heard of it or used it? A few of you - awesome.

Let's revisit that same example you had earlier, the single purpose handler where you've got a single line of code for actually doing business logic. And if I just click my fingers there, this is the exact same application written using Lambda Annotations.

And you see now this is simple, it's focused purely on your business logic. All it is is a method and you've got these two attributes added to your method. The first one, LambdaFunction, that tells Lambda Annotations that this method is going to be its own independent Lambda function. And this allows you to have multiple functions defined in the same code file like you would with a controller class if you were building with ASP.NET.

And then that second attribute, HttpApi, tells Lambda Annotations that you're gonna put an API in front of this Lambda function. This works using .NET source generators - at compile time, Lambda Annotations looks for these attributes and generates all of that boilerplate code for you.

So it's all still there, it all still exists, you just don't need to worry about it anymore - we generate that for you. This also allows you to use dependency injection in Lambda - your familiar Startup.cs file lives on. You can then annotate that with LambdaStartup. And the same thing happens when you compile the code - that source generator is gonna generate all the wiring to actually allow you to use dependency injection with Lambda.
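Putting those pieces together, a hedged sketch (IAccountRepository, Account, and the DynamoDB implementation are hypothetical names for the banking example):

```csharp
using System.Threading.Tasks;
using Amazon.Lambda.Annotations;
using Amazon.Lambda.Annotations.APIGateway;
using Microsoft.Extensions.DependencyInjection;

public class AccountFunctions
{
    private readonly IAccountRepository _repository;

    // Anything registered in the [LambdaStartup] class below gets injected here.
    public AccountFunctions(IAccountRepository repository) => _repository = repository;

    [LambdaFunction]
    [HttpApi(LambdaHttpMethod.Get, "/accounts/{accountId}")]
    public async Task<Account> GetAccount(string accountId)
        => await _repository.Load(accountId);
}

[LambdaStartup]
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
        => services.AddSingleton<IAccountRepository, DynamoDbAccountRepository>();
}
```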

A word of warning with that though - just because you have access to dependency injection doesn't mean you should add a whole bunch of dependencies to your startup, because any work you do in that startup phase directly impacts the cold start time. And that's typically why you see ASP.NET having slightly longer cold starts than a plain .NET function - because you've got to start up ASP.NET.

One of the beauties of Lambda though is it allows you to optimize really specifically. So this is where you start with this banking application - you run ASP.NET on Lambda behind API Gateway, you have all your endpoints route to that same function, and that starts off okay.

And then you actually realize that for 9 of the 10 endpoints on your API, the startup of ASP.NET is okay. One of the cool things you can then do is start to optimize exactly where you need to.

So you can use the routing at the API Gateway level to break out one of your endpoints - maybe your endpoint for getting the account details needs to run slightly differently. Then you can optimize that separately, and you can ask yourself the question - what exactly is it I want to optimize for? Is it latency? Is it pure performance? Do I have different resource requirements or different security requirements?

And then you can break that into a separate function, and for the majority of your endpoints you can keep running ASP.NET - it's completely okay to run ASP.NET on Lambda, to run multiple endpoints in the same Lambda function, or "Lambdalith" as that's commonly known, because like I say, although your cold starts might be longer, you will typically see fewer of them.

And if you're interested in learning more about building low latency event driven applications with .NET, then I'd really recommend going to check out session SVS308 later this week. That's with Marcia and Tim from AWS, and then Brenner from Second Dinner - and Second Dinner built a number one rated mobile game entirely with serverless and .NET, so it can be performant, you can use it in production.

So now you're in a really, really cool place as developers, because you've got this ability to use ASP.NET where it's familiar, and then break out specific endpoints to optimize them exactly where you need to. And after running this in production for a couple of weeks, you're mostly happy, but you're still seeing a couple of issues, a couple of challenges around latency.

And this is where I want to hand over to Craig to talk to you more about observability and instrumenting your .NET Lambda functions.

Thanks, James. So now you have gained the benefits of reliability, scalability and resilience that come with running your applications on serverless. But if there are errors or performance issues in your application, how do you track them down?

What if I told you that it's easy to overcome these challenges, and that we have standard tools and processes that will help you avoid contentious situations like the one you see here? My name is Craig Bossi, and I am a solutions architect here at AWS and a .NET and serverless aficionado. Let's talk observability.

Let's go back to that banking application we were talking about before. When your user interacts with the system, they may call an API to submit some information about a loan, which calls a Lambda function, which may call a downstream API with another Lambda function, and maybe updates a DynamoDB table.

Then when it's done, it sends a message to an EventBridge bus, and that triggers a Step Function which runs a whole bunch of Lambda functions to create some forms for that borrower to fill out. When that's all done, it queues up a message which will ultimately be sent out as an email asking the user to fill them out.

Well, this simplified process has like 12 components in it. In real life, you'll probably have a lot more. Because of this decoupled nature of serverless applications, it's extra important to know what's going on inside your system. And observability is that ability for you to understand the internal state of your application.

When you're running monolithic applications on-prem, it might be easy to track down issues because everything is concentrated into just one or two big code bases. With serverless, it's definitely going to be different - not necessarily more difficult, though. And when we talk about observability, you're going to hear us talk about the three pillars.

These are tracing, logging and metrics, and they are critical for understanding the performance of your serverless applications. And before going into the weeds here, I want to let you know preemptively that it's going to be really easy for you to implement observability in your Lambda functions.

Powertools for AWS Lambda (.NET) is a set of libraries distributed via NuGet that are designed to help you better operate your Lambda functions. They have simple patterns to help you implement observability best practices in your functions without having to fill up your code base with boilerplate or roll your own observability solution. And Powertools for .NET also extends beyond observability into things like idempotency and parameters.

So if you're working on any serverless application, I recommend you check them out. And Powertools is really simple to configure - after you've installed the NuGet packages, you can either configure it directly in code or, more commonly, use environment variables.

And you can see here that there are parameters for logging, tracing and metrics that control things like whether or not to capture the incoming event, the sample rate for your metrics, and the metric namespace. And this is especially useful when we're talking about serverless, because when you use infrastructure as code, you can uniformly apply observability configuration to all of the Lambda functions in your serverless application.

And now we're going to look and see how Powertools is going to make both your code and your life a lot simpler. Wouldn't it be convenient if you could just track a single user's request throughout your entire system? Maybe some of these independent components could somehow be correlated and be aware of each other. That is where tracing comes in, and AWS X-Ray is a service used for distributed tracing.

When a user makes a request, a trace id is generated - for instance, when they call an API Gateway. That trace id is then passed on to downstream services, which in turn pass that id on to their downstream services, and all along the way they're reporting data to the X-Ray service so that all of the activity within your application can be correlated.

And now you can collect individual pieces of data for each component, such as the response time that you see here. But for the X-Ray service to be able to report data for the entire application, you need to make sure that id is passed on correctly to each service - and not every service handles that the same way.

And you're probably thinking, well, that means it's just more complications to deal with. But I'm happy to let you know that when you're writing your code, it's actually pretty easy to maintain that continuity in your .NET Lambda functions and make sure that the trace id is passed correctly.

And you can instrument calls to other services and downstream components to understand their performance. If you're using the AWS SDK for .NET, there's a single method call that will allow you to instrument all of the calls to downstream AWS services, right from within your Lambda code.
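That single call looks something like this (a minimal sketch, assuming the AWSXRayRecorder.Handlers.AwsSdk package is referenced; the DynamoDB client is illustrative):

```csharp
using Amazon.DynamoDBv2;
using Amazon.XRay.Recorder.Handlers.AwsSdk;

public class Function
{
    private static readonly AmazonDynamoDBClient DynamoDb;

    static Function()
    {
        // Instruments every AWS SDK for .NET client created afterwards, so calls
        // to DynamoDB, SNS, etc. show up as subsegments on the current trace.
        AWSSDKHandler.RegisterXRayForAllServices();
        DynamoDb = new AmazonDynamoDBClient();
    }
}
```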

If you're using external HTTP services, there's a library to help you instrument those, so you can understand how they affect the performance of your system. And you're probably all well aware that databases can frequently become a bottleneck in the performance of your applications.

Well, there's a library that allows you to instrument SQL Server calls so you can more closely investigate those issues. But sometimes you want to have an understanding of the performance within your functions, and Powertools can really help you with that.

For instance, you might want to collect data about certain parts of your code and understand, for instance, when a cold start is happening. Well, for that there's a little bit of a hack you have to do. Then, if you want to know when different segments of your code have performance issues, you can collect data about those in tracing - we call those subsegments in X-Ray.

What you have to do is, at the beginning of a subsegment, make a call to X-Ray, and at the end, when it's over, make another call to end the subsegment. If you have a lot of this, it can get complicated. However, once it's all done, you have a great trace - you've got information about the individual pieces of your application, and you get a service map of your serverless application.

But Powertools makes this way simpler for you. All you have to do is add the Tracing attribute to your handler, and all of a sudden that cold start thing I talked about is automatically taken care of for you. All you have to do is configure it. And what about collecting those subsegments?

Remember, before, you had to do a start and an end for each subsegment. Well, with Powertools, all you have to do is take individual method calls within your Lambda function, add the Tracing attribute to those, and supply the subsegment name. Then every single time that method is called, Powertools will report it as a subsegment to X-Ray.
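A minimal sketch with the AWS.Lambda.Powertools.Tracing package (the method bodies and names are illustrative):

```csharp
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using AWS.Lambda.Powertools.Tracing;

public class Function
{
    // Annotating the handler records the handler segment - and the cold start
    // annotation - automatically.
    [Tracing]
    public async Task<string> FunctionHandler(string accountId, ILambdaContext context)
        => await LoadAccount(accountId);

    // Every call to this method is reported to X-Ray as its own subsegment.
    [Tracing(SegmentName = "LoadAccount")]
    private async Task<string> LoadAccount(string accountId)
        => await Task.FromResult($"account-{accountId}");
}
```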

Therefore, you can collect all that same information as before with less code. Now, when you write your .NET Lambda functions, out of the box they come with logging to CloudWatch enabled by default. Ultimately, Lambda is going to pick up anything that goes to standard out or standard error and send it to CloudWatch log groups.

And if you've ever done anything with Lambda before - which most of you have - you're probably already aware of this. Well, there's a couple of built-in ways to do this. You can use the LambdaLogger.Log method, which is basically an encapsulation of Console.WriteLine (which you can also use), and you can also use the Logger property of the context object that's passed into every Lambda handler.

And then you can do things like specify the log severity. And with the recently released advanced logging controls for Lambda, you can automatically collect unstructured or structured logs right within your Lambda functions.

But what if you want to automatically collect more detailed contextual information about your handlers? Well, when you use Powertools for logging, all you need to do is add the Logging attribute to your function. Then, with one configuration, you can capture the event that triggered your Lambda function in the logs.

And by using the Logger class that comes with Powertools, you can also create a uniform log structure that will include additional context with every log you create, such as the X-Ray trace id, whether or not it's a cold start, and other information about your function.

And you can even pass in objects, and those objects will automatically get serialized into your structured logs. And this is great because then you can use this structure later on to more easily query your logs using things like CloudWatch Logs Insights.
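A minimal sketch with the AWS.Lambda.Powertools.Logging package (LoanApplication is a hypothetical type for the banking example):

```csharp
using Amazon.Lambda.Core;
using AWS.Lambda.Powertools.Logging;

public class Function
{
    // LogEvent = true captures the triggering event in the structured logs.
    [Logging(LogEvent = true)]
    public void FunctionHandler(LoanApplication application, ILambdaContext context)
    {
        // The object is serialized into the structured JSON entry, alongside
        // context such as the X-Ray trace id and whether this was a cold start.
        Logger.LogInformation("Processing loan application {application}", application);
    }
}
```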

And CloudWatch is great at collecting metrics about pretty much all compute in AWS, and Lambda is no exception. Lambda comes with several metrics out of the box, such as invocation duration and concurrency, that are helpful for understanding how your applications are performing.

But sometimes you're going to want to collect metrics - related to either performance or business - that you need to define yourself to understand how your applications are running. When you use the AWS SDK for .NET, you can do just that.

And here you can see there's 20-ish lines of code that allow you to collect one data point about, say, the size of a file that's uploaded. It's a lot of code, but when all is said and done, you've collected your metric, and you can use it in dashboards or with CloudWatch alarms.

However, in true Powertools form, you can annotate your handler with the Metrics attribute, and then a lot of that configuration is added for you automatically. Then you can use the Metrics.AddMetric method and log that same data as before in significantly fewer lines of code.
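A minimal sketch with the AWS.Lambda.Powertools.Metrics package (the namespace, service, and event type are illustrative):

```csharp
using Amazon.Lambda.Core;
using AWS.Lambda.Powertools.Metrics;

public class Function
{
    [Metrics(Namespace = "Bank", Service = "DocumentUpload")]
    public void FunctionHandler(DocumentUploaded evt, ILambdaContext context)
    {
        // One line replaces the manual PutMetricData plumbing; metrics are
        // flushed in CloudWatch embedded metric format at the end of the invoke.
        Metrics.AddMetric("UploadedFileSize", evt.SizeInBytes, MetricUnit.Bytes);
    }
}

// Hypothetical event type for the example.
public record DocumentUploaded(double SizeInBytes);
```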

So what does all of this get you - three separate ways of troubleshooting your Lambda functions? Not exactly. By combining those correlated traces, logs and metrics, you're going to gain the vision you need into your applications. You can see a map of a large portion of your application, drill down and look at individual requests - right down to a single user's activity - and you can use the metrics, both automatically collected and custom, to understand the performance of the individual components in your application.

And then you can use those correlated logs to better understand what's going on deep in your Lambda functions. So, putting all of this together: without Powertools, the more complicated you get with your observability, the more lines of code you're writing, and it can get complicated. But by using the techniques I just talked about, you can keep all that functionality and significantly reduce the boilerplate that you're including in your code base.

But now that you know how to collect this data, how do you know what to do with it to improve that performance? James?

So you're now in a really interesting place as developers. You've built out this suite of Lambda functions, you're using best practices to build them, you've got observability and instrumentation to understand them. Now you just wanna know how to make it fast. How do you get the performance that you want on Lambda?

And throughout this next little section, I'm gonna show you how you can get the same performance with .NET on Lambda as you can in languages like Go. And to do that, let's revisit that same architecture you had earlier - you've got ASP.NET running on Lambda, and then you've got that single function that you've split out to optimize differently, which is that one there.

And now you've got observability and instrumentation that can let you really deep dive that specific function and look at exactly what it is that's causing your issues. And you work out that it is, of course, cold starts - the elephant in the room when we talk about .NET on Lambda.

So let's actually explore how you can optimize cold starts, how you can negate some of these problems. And before we get into the specifics, there's a couple of things I just want to point out. The first is that under a steady state load, cold starts typically account for less than 1% of invokes - in the benchmarks you saw earlier, typically about 0.4% of the requests are cold starts.

The second thing to consider is that as a developer, every single time you publish a new version of your function, you are guaranteed to see a cold start. So when you start using Lambda, when you're working out if cold starts are gonna be a problem for your workload, don't just deploy your function, run it once, and think "ah, cold starts - this isn't gonna work for me."

Run your function with some actual load under it, something like what you're actually gonna see when it's running in the real world. That will then tell you if cold starts are actually gonna cause a problem or not. And the final point is that cold starts are typically only relevant for synchronous invokes.

If you're doing asynchronous work - reading messages off a queue - for the vast majority of use cases, an 800 to 900 millisecond cold start isn't going to cause you an awful lot of issues. It might in some cases, of course. So to look at how to optimize a cold start, let's look at exactly what happens when a request comes into Lambda. An execution environment needs to be created. After that environment gets created, your function code is downloaded, the .NET runtime is started up, and then the actual initialization of your code happens - objects are instantiated, code in your constructors runs.

And that entire period of time there is the cold start - that's what's happening. Once all that's happened, the actual payload can be passed to your function and the actual request can be invoked, the actual handler can run. And when you think about this, the only two bits that you can control as developers are these two here: the size of the bundle - how much code you have to download - and the actual initialization phase.

So two really quick things you can do to get started: make sure you keep that bundle size as small as possible, minimize your dependencies, and then minimize the amount of work you do in the startup of your code. Now, of course, you might do those two things and still see some challenges with cold starts.

So the other thing you can look at is a feature of Lambda called provisioned concurrency. And that is something you can enable without changing a single line of code. It's a feature of Lambda that performs that initialization phase ahead of time, so that you have a set of execution environments up and running all the time, and you can set how many you want - you might say you want 10 provisioned environments.

It's a really, really fantastic performance feature - it was built specifically to mitigate cold starts. And the first thing I'm sure you're all thinking now is: what about the cost of that? Because provisioned concurrency costs you money, right? You've got these execution environments running all the time - that means it's gonna cost you.

And the actual answer to that question is more nuanced than "it's just gonna cost you money." So let's have a quick look at the pricing of Lambda and how this works.

"So this is the pricing in us-east-1 and this differs ever so slightly from region to region.

The on-demand pricing is nice and simple. You just pay to the millisecond for exactly how long your function is running.

The provisioned concurrency cost is split into two numbers. There's the provisioned concurrency cost, which is what you pay all the time, as long as that provisioned environment is available, and then you've got the invocation duration, which is the same as on-demand - that is charged to the millisecond.

Now, if any of you are really quick at maths or really, really observant, you'll notice that if you add together the two provisioned concurrency numbers, that actually works out cheaper than on-demand. So what that actually means is that if you're using provisioned concurrency and you're fully utilizing the environments that you have, it works out about 16% cheaper than using on-demand. And the tipping point in us-east-1 is about 60% - if you're using more than 60% of the environments you've got provisioned, you typically see a lower cost.

And this is one of the really interesting things with Lambda, because now you can get both a lower cost and better performance. To do this, obviously, you need to have a relatively stable workload. You can automatically scale your provisioned environments up and down using Application Auto Scaling, but of course, you need some element of steady state to be able to pick the right number of provisioned execution environments.

So this might not be right for all workloads. But if you've got a steady state, this can give you better performance and a lower cost as well.

Now, if that won't work for you, what other options do you have? The first one - and one that's really important, actually - is right-sizing your Lambda functions. When you have the right resource allocation for your function, you can minimize cold starts, minimize the runtime, and minimize the cost. And what I mean by right-sizing is that you can allocate between 128 megabytes and 10 gigabytes of memory to your function, and every single function will have an optimal allocation.

Now, of course, one option you have is to take your function and manually go through and change the memory configuration and test it and change it and test it and that'll probably take you a little bit of time. The other option you have is to automate that entire process using lambda power tuning.

So Lambda Power Tuning is something that you can deploy into your account, and it uses Step Functions under the hood. You invoke a Step Function, passing in the Lambda function you want to test, a sample payload, and the array of memory configurations that you want to test. Power Tuning will then go off and run your function over and over again with all these different allocations, and it will give you a lovely looking graph that looks something like this.

The blue line is the cost and the pink line is the performance, the invocation duration. And as you might expect, between 128 megabytes and 10 gigabytes of memory there's quite a considerable jump in the cost - it costs more to use 10 gigabytes than 128 meg. What's interesting, though, is that between 128 meg and two gigabytes there's actually very little change in the cost at all - it's pretty much the same - but over that same range, the invocation duration comes down considerably. And actually, for this specific function, two gigabytes is about the sweet spot.
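If you want to try this yourself, the Power Tuning state machine takes a small JSON input; a hedged sketch with illustrative values (the ARN, payload, and numbers are placeholders):

```json
{
  "lambdaARN": "arn:aws:lambda:us-east-1:123456789012:function:GetAccountDetails",
  "powerValues": [128, 256, 512, 1024, 2048, 3072],
  "num": 50,
  "payload": "{}",
  "parallelInvocation": true,
  "strategy": "balanced"
}
```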

So the point here is: don't be like me when I first started using Lambda and just think, well, 128 megabytes is gonna be cheap because it's the cheapest per millisecond, so therefore I'll just use that. Actually look at your specific workload and use tools like Power Tuning to work out what the sweet spot is. Typically for .NET, a gigabyte is about the place to start - so start there and optimize out from there, and you'll typically get the best performance.

Now, of course, once you've done this and you've got the right memory allocation, the performance still might not be fast enough. You might want your functions to feel like the drivers that were here last weekend speeding around Vegas - you want them to just be fast. That's all you want, pure performance. And that's where native ahead-of-time compilation can help you out.

So, who's heard of native AOT, or used native AOT at all? OK, a few of you - cool. Native AOT is a feature of .NET that went GA in .NET 7, and it's been improved upon even more in .NET 8. It allows you to generate a native binary of your .NET application, which removes the need for the JIT and dramatically improves the startup time of your application. And when I say dramatically - let's revisit those same benchmarks from earlier.

So here you're comparing a .NET 6 single purpose handler with the exact same function natively compiled in .NET 8. Warm start numbers are comparable, pretty much the same - .NET's fast, you all know that. At cold start, the numbers are pretty dramatic: that's a 62% improvement in performance at P50, which is the 50th percentile, and a 59% improvement at P99 - consistently sub-400-to-500-millisecond cold starts with .NET on Lambda. This is comparable to the Go runtime using the same application architecture.

With .NET 8, Microsoft took this one step further and announced limited support for ASP.NET with native AOT. So let's go back to those same numbers again: running a minimal API on Lambda with .NET 6, versus the exact same minimal API natively compiled with .NET 8. Warm start numbers are the same; at cold start, again, a dramatic improvement in performance - that's a 71% improvement in startup time at P50 and a 53% improvement at P99.

So the numbers when you're using native AOT at cold start are dramatic. The improvement is fantastic - it's an absolute game changer for building serverless .NET applications.

Now, of course, if you're familiar with .NET native AOT, you'll already know this; if you're not, there are some tradeoffs that you have to take on when you use native AOT. So let's explore how you turn on native AOT with Lambda, and what the tradeoffs actually are that you have to deal with.

The first thing you'll need to do is make sure that the assembly - the executable that you generate - is called bootstrap. Typically, when you're using native AOT with Lambda, you'll need to use a custom runtime, and the way custom runtimes in Lambda work is that they just look for a file called bootstrap and execute that file. So the binary that you generate will need to be called bootstrap. And then you need to make sure you set the PublishAot flag to true - this tells the .NET compiler to actually natively compile this .NET application.
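In the project file, that's two properties (a minimal sketch):

```xml
<PropertyGroup>
  <!-- Custom runtimes look for and execute a file named bootstrap. -->
  <AssemblyName>bootstrap</AssemblyName>
  <!-- Ask the .NET compiler to produce a native binary at publish time. -->
  <PublishAot>true</PublishAot>
</PropertyGroup>
```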

Now, one of the tradeoffs that you have with native AOT is that you need to compile your application on the same operating system and processor architecture that it's gonna run on. So, how many people are using Amazon Linux 2 or Amazon Linux 2023 as their development machines? Nope, nobody. That's what I thought.

So what does that mean? How do you then compile your Lambda functions to run on Amazon Linux 2, because that's what Lambda uses under the hood? Well, we've actually built this into the .NET Lambda tooling. So if you're using the Lambda CLI for .NET, and you're using either of these two commands to package or deploy your function, this functionality is built in. The CLI tool will look for that PublishAot flag, and if it detects it, it'll download a Docker image of Amazon Linux 2, compile your application inside that running Docker container, and then spit the executable back out onto your local file system. And if you were then to go and run that executable locally, of course, it would fail - because you're not running Amazon Linux 2.

So that's one challenge solved - we can now compile our code on the right OS. The next challenge is that if you're using a custom runtime, the .NET runtime isn't there. It doesn't exist - it's just an empty Lambda environment. So you actually need to bootstrap the .NET runtime yourself.

So if you're using native AOT with Lambda, you'll need to add a static Main method to your application - that's the entry point the executable is gonna use. And inside that static Main method, you'll need to make sure that you bootstrap the Lambda runtime. You can see here that you use the LambdaBootstrapBuilder.Create method, passing in the handler that you're gonna use and the serializer you want to use.
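A minimal sketch of that entry point, assuming the Amazon.Lambda.RuntimeSupport and Amazon.Lambda.Serialization.SystemTextJson packages, and the source-generated serializer context covered in a moment (the handler itself is illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.RuntimeSupport;
using Amazon.Lambda.Serialization.SystemTextJson;

public class Function
{
    private static async Task Main()
    {
        // Bootstrap the Lambda runtime yourself - the custom runtime simply
        // executes the bootstrap binary and nothing more.
        Func<string, ILambdaContext, string> handler = FunctionHandler;
        await LambdaBootstrapBuilder.Create(handler,
                new SourceGeneratorLambdaJsonSerializer<CustomSerializationContext>())
            .Build()
            .RunAsync();
    }

    public static string FunctionHandler(string input, ILambdaContext context)
        => input.ToUpperInvariant();
}
```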

Now, of course, this is more boilerplate code that you need to work with. You need to add this when you're using native AOT; if you're not using native AOT, you need to get rid of it. It's a lot for you to think about, right? Unless you're using the Lambda Annotations framework - because Lambda Annotations, as of about three weeks ago, has a new LambdaGlobalProperties attribute. And in that attribute, you can set the GenerateMain property to true. What this will do at compile time is use source generators to automatically generate that static Main method that you've just seen, which means you don't need to worry about adding that boilerplate code either. The code you can see there is completely ready to run on Lambda, natively compiled - it's ready to go.
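That's a single assembly-level attribute (a minimal sketch, assuming a recent Amazon.Lambda.Annotations version with the Amazon.Lambda.RuntimeSupport package still referenced):

```csharp
using Amazon.Lambda.Annotations;

// Ask the Annotations source generator to emit the static Main method,
// runtime bootstrap and serializer wiring for you at compile time.
[assembly: LambdaGlobalProperties(GenerateMain = true)]
```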

And one of the cool things about this is that it means you can move between different versions of .NET, and between native AOT and not native AOT, without changing a single line of your application code. You might just switch this flag from true to false, you might change the PublishAot setting in your project file, but you don't really need to change a lot of your actual application code. Under the hood, this is what actually gets generated by Lambda Annotations.

Now, you don't really need to read all of this. What is important is this switch statement at the top here. The static Main method that gets generated uses a switch statement to determine which handler is gonna be used. What this allows you to do is generate a single binary - you can have multiple Lambda functions defined in that same binary and then deploy it to multiple different Lambda functions - and then you just need to set the annotations handler environment variable to determine which handler is gonna be used. And we're going to improve upon this more next year, to make this simpler and better.

The final thing you need to think about with native AOT is JSON serialization and deserialization. When you use native AOT, a lot of the reflection-based functionality in .NET, and the way the trimming works, means that Newtonsoft.Json and reflection-based System.Text.Json don't really work anymore - you can't just call JsonSerializer.Serialize or Deserialize. So you need to make sure you're using source-generated serializers. This is a feature that came to .NET - I believe in .NET 6 - and it allows you to generate the code required for manipulating JSON at compile time.

So you add a new partial class to your project. Much like earlier, you can call this whatever you want - Banana again, if you wish - but I would recommend using something sensible like CustomSerializationContext. What is important is, again, the base class that you inherit from: you inherit from the JsonSerializerContext class that comes from the System.Text.Json.Serialization namespace. And then you add some attributes to this class, and these attributes determine which objects you want to be available for JSON serialization and deserialization.

So you'll see here, I've got two classes custom to my application code - the Loan and the LoanWrapper - and then I've got two objects that are actually custom to Lambda: the API Gateway event payloads that come in and go out. And you need to make sure you do this for every single class that you need to JSON serialize or deserialize. There's lots of good documentation on Microsoft's site about how to do source-generated serialization.

The other important thing that's there for Lambda is that line at the top - you can see the LambdaSerializer attribute. You actually need to tell the Lambda runtime to use your source-generated serializer, so you need to make sure you set your Lambda serializer to be the SourceGeneratorLambdaJsonSerializer, passing in your custom context as the type argument.
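Put together, a hedged sketch of that context class (Loan and LoanWrapper stand in for the hypothetical domain types from the slide):

```csharp
using System.Text.Json.Serialization;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;
using Amazon.Lambda.Serialization.SystemTextJson;

[assembly: LambdaSerializer(
    typeof(SourceGeneratorLambdaJsonSerializer<CustomSerializationContext>))]

// One attribute per type you need to JSON serialize or deserialize.
[JsonSerializable(typeof(Loan))]
[JsonSerializable(typeof(LoanWrapper))]
[JsonSerializable(typeof(APIGatewayHttpApiV2ProxyRequest))]
[JsonSerializable(typeof(APIGatewayHttpApiV2ProxyResponse))]
public partial class CustomSerializationContext : JsonSerializerContext
{
}
```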

And that's it. That is all there is to enabling AOT on Lambda. Update the assembly name, set the PublishAot flag, make sure you're generating that static Main method, and make sure you're handling JSON serialization and deserialization. And you can start to get the benefits of native AOT.

Now, of course, going back to the architecture example from earlier, ASP.NET on Lambda might actually give you the right performance in a lot of use cases. So I'd recommend adding native AOT only where it's absolutely necessary because of these tradeoffs that you need to make.

There's a lot of issues that can happen with native AOT that will only actually appear at runtime. For example, the way the trimming works - if you've got a property on a class that isn't used anywhere in your code, the trimmer may trim that out. So only use it exactly where it's necessary, and if you are using it, make sure you test your application.

And testing with serverless applications does become a little bit different, because typically serverless applications, as Craig demonstrated earlier, use lots of native service integrations. You might have SQS, SNS, EventBridge, and you don't really want to be emulating all of these services locally and trying to end-to-end test your entire application locally in the same way you might with a more typical ASP.NET-plus-SQL-Server type application.

So the common mantra with serverless is to unit test locally, test everything else against actual cloud resources. And what that means is that you might write a small number of unit tests to test kind of the core business functionality that you have. And then when it comes to running more robust tests, push them actually out into actual cloud resources.

And I'm not gonna spend too long on testing - I would point you towards DEV308 later this week with Dr. Drouin, who's doing an entire session on testing cloud-native .NET applications. However, if you are using .NET on Lambda and you're using the Annotations framework, just make sure that you're writing your functions in a way that is testable. And Annotations allows you to do that: because you're using dependency injection, you can mock the things that are actually relevant to your function.

So here you could inject a mock implementation of the IAccountRepository, use that for your unit tests, push it out to the cloud and run your actual integration tests against the actual cloud resources.
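For instance, a hedged unit-test sketch using the hypothetical IAccountRepository and AccountFunctions from earlier (with xUnit and Moq):

```csharp
using System.Threading.Tasks;
using Moq;
using Xunit;

public class AccountFunctionsTests
{
    [Fact]
    public async Task GetAccount_ReturnsAccountFromRepository()
    {
        // Mock the dependency that would normally talk to DynamoDB.
        var repository = new Mock<IAccountRepository>();
        repository.Setup(r => r.Load("123"))
                  .ReturnsAsync(new Account { AccountId = "123" });

        // Dependency injection means the test can construct the function directly.
        var function = new AccountFunctions(repository.Object);

        var account = await function.GetAccount("123");

        Assert.Equal("123", account.AccountId);
    }
}
```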

So now you're in an even better place as developers, because you've got these well-defined, well-built Lambda functions, you've got observability and instrumentation, and you've got the capability to get Go-like performance with your single purpose handlers, and really good performance with ASP.NET on Lambda.

And now you just want to be able to ship it. And that's where I wanna hand back over to Craig to talk to you about deploying .NET Lambda functions.

Thank you, James. So now, as you can imagine, having all this awesome Lambda code doesn't really get you a whole lot if you don't have a way to actually get it to the cloud efficiently. So for the remainder of the time, let's explore how you can take your production-ready .NET serverless Lambda applications and get them into the cloud and run them as efficiently as possible.

Once you've started to run your applications in the cloud, your thinking has to go beyond what server this is going to run on or what folder to put this DLL in. Cloud-based serverless applications are highly distributed, event-based applications. They don't all live in the same process, they don't live on the same hardware, and frequently not even in the same data center - but that's all abstracted away from you.

Your .NET-based serverless applications are going to have, of course, lots of Lambda functions - and that could be dozens or more depending on the complexity of your application. But there are also going to be other services - for instance, supporting services like storage using S3 or DynamoDB, communication services like API Gateway or SQS, or security services like Cognito. There's a lot of stuff here. How do you manage it all?

And the answer is infrastructure as code. To help you better understand how infrastructure as code can help you, let's look at a typical piece of a serverless .NET application in the context of our banking app.

So in this example, you see a customer - they upload a document which eventually ends up in an S3 bucket. That triggers a Lambda function which does some processing on that document. And then when it's all done, it publishes to an SNS topic to notify downstream subscribers that it's all done.

So let's look at how you can actually model this in a couple of different infrastructure as code frameworks.

So because of the popularity of serverless, actually a number of infrastructure as code frameworks have come about specifically to help you build those applications. And one of them is AWS SAM, or the Serverless Application Model. It uses a declarative template syntax which is a superset of CloudFormation.

And what you see here is a real piece of SAM template that defines the infrastructure that I just talked about a moment ago. Here's where you define that bucket, and here's where you define your SNS topic, and the place where you define your Lambda function.

But one of the powerful parts of SAM is that it gives you the ability to simplify how your components interact with each other. You just express that you want the bucket to trigger the Lambda function and the Lambda function to publish to SNS. And then SAM does the rest for you and generates all the code in the background that you don't have to.

And it also applies the appropriate permissions to your Lambda function, so it only has the minimum security permissions needed to perform its task.
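A hedged sketch of that template fragment (resource names and the handler string are illustrative):

```yaml
Resources:
  DocumentBucket:
    Type: AWS::S3::Bucket

  DocumentProcessedTopic:
    Type: AWS::SNS::Topic

  ProcessDocumentFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: Bank.Documents::Bank.Documents.Function::FunctionHandler
      Runtime: dotnet8
      # Express the interaction; SAM generates the event wiring for you.
      Events:
        DocumentUploaded:
          Type: S3
          Properties:
            Bucket: !Ref DocumentBucket
            Events: s3:ObjectCreated:*
      # A scoped policy template: least-privilege publish to the topic.
      Policies:
        - SNSPublishMessagePolicy:
            TopicName: !GetAtt DocumentProcessedTopic.TopicName
```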

Now, the Cloud Development Kit, or AWS CDK, is another infrastructure as code framework that people frequently use to build serverless applications on AWS. And one feature that stands out about CDK is that you can build your infrastructure as actual code using your favorite programming language.

And my guess is that for most of you in this room, that would be C#. And this is a bit of C# that, again, defines the infrastructure we've been talking about. You can see that here we instantiate an S3 bucket, here's the SNS topic that we create, and the function that processes the data.

And similar to SAM, you just express how the components interact with each other, and then CDK will do that wiring for you. And ultimately, CDK produces CloudFormation in the background, but you don't need to worry about that. You can produce all of your infrastructure as code using your favorite programming language, C#.
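A hedged sketch of the same three resources in CDK with C# (construct ids and the asset path are illustrative):

```csharp
using Amazon.CDK;
using Amazon.CDK.AWS.Lambda;
using Amazon.CDK.AWS.S3;
using Amazon.CDK.AWS.S3.Notifications;
using Amazon.CDK.AWS.SNS;
using Constructs;

public class DocumentProcessingStack : Stack
{
    public DocumentProcessingStack(Construct scope, string id) : base(scope, id)
    {
        var bucket = new Bucket(this, "DocumentBucket");
        var topic = new Topic(this, "DocumentProcessedTopic");

        var function = new Function(this, "ProcessDocumentFunction", new FunctionProps
        {
            Runtime = Runtime.DOTNET_8,
            Handler = "Bank.Documents::Bank.Documents.Function::FunctionHandler",
            Code = Code.FromAsset("./src/Bank.Documents/publish")
        });

        // Express the interactions; CDK wires the events and the IAM for you.
        bucket.AddEventNotification(EventType.OBJECT_CREATED, new LambdaDestination(function));
        topic.GrantPublish(function);
    }
}
```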

So I just talked about how you can actually create the serverless application in two of these frameworks. But in reality, there's a lot more of those that you can use - many customers use things like Terraform, CloudFormation, Pulumi to build their serverless applications.

And the question I get asked a lot is how do I pick the right one for me, and how do I get started? And of course, my answer is, well, it depends. But we do have some opinions that can really help you make that decision process easier.

So if you're a .NET developer and you want to have your environment be .NET all the way from the beginning to end, then a code-based framework like CDK is probably going to be right for you. If your company is already using CloudFormation, then SAM has really good developer tools and the templates you create aren't really going to be a whole lot different than what your infrastructure people are already using.

And if you're using one of the more enterprise frameworks like Terraform, you might even investigate a hybrid solution. For instance, Terraform has an adaptation of CDK where you can write your infrastructure as C# code and it will output Terraform configuration.

And ultimately, you're probably going to be able to find something that works for the developers and for the infrastructure folks.

So now you have the basic tools for composing your application. When you start building your application, it may start small - you may give a way for your customers to get account details, then allow them to apply for loans. Then of course, you will need to underwrite those loans and process the documents they upload for them. Then you definitely need them to be able to make payments, and then you want to be able to notify them when payments are due or you have offers for them.

And so there's a lot of stuff here, but it's probably just the tip of the iceberg in your application. But because you have infrastructure as code, everything's awesome, right?

Well, if you're not careful, even though you've replaced your monolithic application with serverless, you may arrive back where you started from before. The templates can get out of hand and the application may become so complex that nobody actually knows how it works, and it could be risky to deploy.

So this is a real issue you can face if you don't have proper planning. So you can make this more manageable by using a concept called domain-driven design. And that's a methodology that allows you to model software systems based on the business function or domain. And this applies to monolithic applications and also to distributed applications like your serverless apps.

And if you think of your current and future needs, you can determine what the logical boundaries are within your application, and then set out to build each of those as an independent serverless application.

But don't go overboard - just because you can slice and dice your serverless application to the nth degree doesn't mean you should. The key is to be sensible and practical, and use those domain-driven design principles to make sure that each of those components and those serverless applications performs the right business function.

But what's the recommendation for the proper size of a serverless application? And of course there's no hard and fast rule, but if a serverless application spans more than one team or does two kind of unrelated things, then it's a really good candidate to split up.

Now, the DevOps folks in the room are probably thinking, well, this stuff is great for developers, but now I got to learn a whole bunch of new stuff just to be able to deploy all these applications. And I have really good news for you - that you can basically use the same tools and practices to automate the deployment of your serverless .NET applications that you're using now.

You can use familiar tools such as AWS CodePipeline, Azure DevOps, GitHub Actions, Jenkins - really any of the standard CI/CD tools - and use standard automated deployment patterns.

So what does the pipeline look like for a typical serverless application? It's probably not a whole lot different than what you're used to. First, you ensure that your infrastructure as code tools are installed and initialized. Then you use standard .NET tools to build and package your .NET code. Then you prepare your serverless artifacts, which does things like bundling your assets so that you can deploy them to multiple environments.

And then you use your infrastructure as code tool to deploy your infrastructure or deploy your serverless stack. Then this will provision your infrastructure and your application code.

So let's take a look at a couple of the popular infrastructure as code tools and look at what it takes to author a pipeline for each of those. We're going to look at SAM and CDK, which I already talked about, and Terraform, which again is a really popular infrastructure as code tool that people are using.

The first step is to install and initialize your infrastructure as code framework. And you can see here that there's really just up to maybe a couple of commands that you need to run to be able to initialize the tools. And in most CI tools you have, you can take this and make reusable templates that you can use in other pipelines.

Next, you're going to build and package your .NET code. And this would include things like running unit tests and security scans and so forth. And you can see here that it is literally identical between the three frameworks. And that's because you're using standard CLI tools for .NET to automate this.

Then you prepare your serverless artifacts. Now, this is going to be a little different for each of these tools, and they'll have some different parameters and so forth, but they essentially do the same thing - SAM has build, Terraform has plan, and CDK has synth. These ultimately create artifacts that you can either save to a central location and deploy later, or deploy later in your pipeline.

And then finally, you're going to deploy those artifacts to the cloud. Each tool again is slightly different in what it takes for parameters, but overall the experience is going to be pretty similar regardless of what you use.

SAM and CDK both have the deploy subcommand, and this ultimately deploys a CloudFormation stack. And Terraform uses the apply subcommand to deploy the plan you created earlier and update the state file.
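As a concrete sketch, the SAM variant of those four stages might look like this in a pipeline script (CDK swaps in synth/deploy, Terraform plan/apply; the flags are illustrative):

```sh
# 1. Install and initialize the infrastructure as code tooling.
pip install aws-sam-cli

# 2. Build and package the .NET code with the standard CLI tools
#    (unit tests, security scans and so on slot in here).
dotnet build --configuration Release
dotnet test

# 3. Prepare the serverless artifacts.
sam build

# 4. Deploy the stack to the cloud.
sam deploy --no-confirm-changeset
```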

That's it - those are the four steps that you need to build a pipeline for your serverless applications. The final step of course is to celebrate, because now you know how to repeatedly build and deploy your .NET applications - your serverless .NET applications - and you can see how easy it is.

You just write the code like James showed you, you choose your infrastructure as code tool to model your application, and deploy your pipeline. That's it.

So I know that this talk has kind of been a lot of stuff, but if you only take away a few things from this, I want you to remember that even though writing serverless .NET may be a paradigm shift for you, once you have these best practices, it's easy to produce great performant .NET serverless applications in AWS.

Use those native tools and service integrations that we talked about to simplify your applications. Use the techniques available that we talked about to optimize both the performance and the code of your Lambda functions.

Adopt libraries such as PowerTools and Lambda Annotations to help you keep things simple while retaining the functionality that you need to effectively operate these.

And pick the right infrastructure as code framework for your application and business and team, and use domain-driven design principles to structure your applications in such a way to keep them maintainable.

So we've got tons of great documentation on .NET on AWS, and serverless in particular. James and I have put together a curated list of items over at serverlessland.com - so take a snapshot of this; it will be some good reading material later as a follow-up to all this.

And if you want to talk about .NET on AWS, serverless, anything like that, come on over to the .NET Village at some point during the rest of the week - us or our colleagues are going to be there, and we'd really love to talk to you about what you're doing and answer your questions.

And finally, we're really passionate about helping you with your .NET on AWS, so it would be really great if you would fill out the survey in your app, and that feedback will help us make talks like this better in the future.

And I'm happy to say that now all of you have the tools you need to effectively build production-ready .NET serverless-based applications and deploy them to AWS. So enjoy the rest of re:Invent, and thanks for coming to see us today!
