Test automation for .NET applications running on AWS

Today, we're going to talk about test automation. Imagine the following scenario: you've been working endless days, sleepless nights. You just deployed your application to the cloud, and something bad happens. One of the services crashes immediately as it starts up, and you know what that means. It means a lengthy debugging session trying to figure out what went wrong - debug, deploy, debug again, redeploy, read 100 lines of code and 1,000 lines of logs - until you finally figure out what went wrong and what you can do about it.

What about a different scenario? Let's say you've been working on a new feature, and you get a phone call telling you that a feature you finished and shipped a month ago has stopped working. And as you try to figure out why it stopped working, you realize that a fellow developer changed something in their code and accidentally broke your functionality.

Now, my name is Dror Helper. I'm a Microsoft specialist solutions architect working for AWS. Those things have happened to me more than once, I've seen them happen to my clients, and there's bound to be something we can do about it.

Now, in this talk, I'm not going to show you a new AWS service that will solve those problems. Instead, I'll show you a strategy - a way to use automated tests to guarantee that those things won't happen in your project. I'm going to use an ASP.NET Core microservice-based architecture and AWS Lambda to show you the key similarities and the key differences between the two when you want to write automated tests for them.

Because testing a microservice architecture is not trivial, far from it. Usually you have a service, and that service has a database - but that's not a microservice architecture, that's just a service. So you probably have another service, called by the client or by the first service, with its own database. And that's still relatively simple to test. But we also have async calls - maybe we're using queues to send messages to another service, which processes those messages in the background - and those things do not run on your machine, right? They run on some cluster, inside Docker images, or on AWS in Elastic Container Service or Elastic Kubernetes Service. And then, more likely than not, you're using AWS services as well, such as S3 to store your files. Most such architectures also call external services - services created by another team, or by another company altogether. Testing that kind of architecture, which only has three services at this point, is far from trivial.

Some of those parts cannot run on my machine. Setting this system up successfully is hard. And even if something breaks, how do I find where it broke? What's the problem?

Now, moving to Lambda functions. AWS Lambda serverless architectures start in the cloud. Usually you have a service that will trigger your Lambda function - something that will call your code. For example, let's say we have an S3 bucket where we upload files. When a new file arrives, a Lambda function wakes up, processes that file, and stores the results in a database, let's say DynamoDB. Other users can make HTTP calls to API Gateway to get information about those files, or maybe perform some operation and get the results back - and all of that runs inside AWS.

So the question is: where are the tests? Where do I run the tests that test something that is outside of my machine?

Now, before we move forward - why do I talk about testing? Who here is a C# developer? Raise your hand. OK. Who writes unit tests? Nice number.

We're talking about automated tests because, basically, it's not your job to write tests - there's a whole job for that, called a tester. But it is your job to guarantee that your code works: that it performs as you expected, at high quality, implementing the features you are required to implement - and not only today, but continuing to function tomorrow. It's part of your job to make sure that your code works. Period.

Other than that, you want to make sure that your code works because the thing I really hated as a developer is when someone would grab me in the hallway and pull me back to fix something I implemented a week or a month ago, stopping my progress forward. The more you embrace the idea of writing automated tests, the less you'll find yourself moving backwards, fixing things you already finished - you'll move forward faster using tests.

Now, there are different kinds of automated tests, depending on what you're trying to achieve. There are tests that test your entire system, your entire workflow, end to end. You see those quite a lot when you have an automation team: they set the whole application up - all the microservices, the whole system - put something in one end, and check on the other end that something happened. But there is also interaction-based testing: you want to make sure that the different parts of the system know how to talk to one another, that a message coming from one end will be received correctly on the other side. And finally, you have your logic that you want to test. Each of those layers is tested by a different type of test: you have unit tests to test your logic, integration tests to test interaction, and end-to-end tests to test your entire system.

Those names tend to change depending on who you talk to or which book you read, but for the sake of this session, those are the names I'm going to use. Has anyone here ever seen this testing pyramid? OK, a few. The testing pyramid is a concept introduced by Mike Cohn around 2004 - almost 20 years ago - and the idea is that the higher you go up the pyramid, the fewer tests you're supposed to write.

Now, the exact ratio changes between different projects, but the reason is that the higher you go, the bigger the scope you test. That sounds like a good thing, and it is: the more of my code a test covers, the more real-world the scenario I'm implementing and automatically testing. But a bigger scope also means I can test fewer scenarios, fewer different things - especially corner cases. Testing that one of the services misbehaved and crashed is very hard to do end to end, and very easy to do in a unit test.

So I need all of them. Run time also increases as you go up - again, not a surprise. A unit test is a very small test that runs in less than a second; an end-to-end test can run for a few minutes. Maybe that doesn't sound like a lot, but a 10-minute test means that six of those will run for an hour. And I don't know about you, but I'm not willing to wait an hour every time I fix something in my code. It also means you can only run the tests seven, maybe eight times a day, and that will hurt your productivity. It's not a good use of your time.

And finally, maintenance overhead. As I said, it's not your job to write tests; it's your job to ensure that your code works. So you want to spend as little time as you can on those tests - and it's not about writing them. Writing tests is simple, easy even. The problem happens when you change things in your code and you want the tests to either change accordingly or continue to function. Basically, you only want them to fail when you have a bug.

And the higher you go in the pyramid, the harder it is to maintain those kinds of tests, and the more reasons a test has to fail - anything along the path of that specific scenario. A unit test, on the other hand, will only fail for a very specific reason, and it's very easy to delete or fix, because it's a very small piece of code.

On top of that, maintenance is also about finding what went wrong when a test fails. You want to know immediately what went wrong and where. That's easy to do with a unit test; with an end-to-end test, you probably need to investigate - it could take even a few hours to understand why an end-to-end test failed.

So we want all three levels. Now, a good test needs to have the following properties. It needs to be readable and maintainable, because you are going to read the test - not today when you wrote it, but tomorrow, in a month, in a year, when it fails. So you want to make sure that you can understand what the test does even after you've forgotten what you implemented. And if your test fails, you want your fellow developers to understand what the test does as well.

A test needs to be deterministic and predictable. And that can be summarized in one word - trustworthy. You need to be able to trust your tests and know that when a test fails, you have something bad in your system, something wrong, something that you need to fix. If you have a test that tends to fail and then pass, you get used to the fact that when it fails, what do you do? You just run it again. And at that point, that test does not matter anymore. And if you have a few tests that fail and then pass, your whole test suite will lose its effectiveness. You want to be able to run the same test 1,000 times, one after the other, and get exactly the same result. You want to be able to run your tests in any order possible and get the same result. You want to run your tests on your machine, on your fellow developer's machine, on your build server, and get exactly the same result.

And finally, your tests should be simple to run - with the click of a button. They should initialize whatever they need for the test, run the test, and clean up afterwards. You don't want a Word document explaining how to set up your testing system - I've seen those. You want everyone in your company to be able to run the tests, essentially offloading that problem to the whole team, so that if something bad happens, everybody can understand and run those tests, before or after fixing a problem.

So diving in, the first type of test - the unit test - is the most common, or at least the most well known; a lot of people talk about unit tests. Unit tests test only logic. The focus is testing that the code - most of what you actually do in your day - works. A unit test needs to test a small piece of code: a class, or a few classes grouped together. And you want to extract or remove every external dependency - you don't want to call a database from a unit test. We do that by grabbing a piece of our code and using a unit testing framework, where we initialize that part with its dependencies, fake all the external dependencies using a mocking framework, and just focus on our logic.

And this is an example of a unit test - many of you have written something similar. A unit test is essentially a method. It resides within a class that is sometimes called a test fixture, and the only thing that makes this method special is that it has an attribute marking it as a test. In this case, with NUnit, we use the [Test] attribute to mark a specific method as a test, so the test runner will know to find those methods and execute them. You can do that from within Visual Studio, from your command line - you name it. Any software that can run unit tests can find those methods and run them. And that's it: it usually doesn't return anything - it's public void - and it has a test name. If you write a good test name, it will be easy for you to understand why the test failed when it fails. And then whatever you put inside is basically your test. It doesn't even have to be a unit test - a unit testing framework is basically a method-running framework. That's it.
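To make that concrete, here's a minimal sketch of that anatomy using NUnit (the fixture and test names are just illustrative):

```csharp
using NUnit.Framework;

// The fixture is just a class; the [Test] attribute is the only thing
// that lets a test runner (Visual Studio, `dotnet test`, a CI agent)
// find this method and execute it.
[TestFixture]
public class ShoppingCartControllerTests
{
    [Test]
    public void GetById_ItemNotInDatabase_ReturnsNotFound()
    {
        // the body of the test goes here
    }
}
```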

But we've come to understand that the structure of a unit test should probably look something like this. It's divided into three distinct parts. The first part is where I initialize my system and create my fake objects. On the first line, I'm creating a fake repository - the class that's supposed to call my database. That fake object does absolutely nothing by default, and I set it up so that when a specific method, FindById, is called with a specific parameter, it returns null - emulating the fact that that specific object is not in my database. And then I create the two real classes that I use inside my test: a manager and a controller. A lot of managers going around in object-oriented code. And then we get to the next part.

The next part is the play button - that's where I run the scenario I'm trying to test. Also known as the act.

And finally, assert - that's where I check the result: in this case, that if an object is not in the database, I should get a 404 Not Found result. As simple as that.
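Put together, the test described above might look something like this - a sketch using NUnit and FakeItEasy, where the repository, manager, and controller names are illustrative stand-ins for the classes on the slide:

```csharp
using FakeItEasy;
using Microsoft.AspNetCore.Mvc;
using NUnit.Framework;

[Test]
public void GetById_ItemNotInDatabase_ReturnsNotFound()
{
    // Arrange: a fake repository instead of a real database. When
    // FindById is called with this id, return null - emulating the
    // fact that the object is not in the database.
    var fakeRepository = A.Fake<IShoppingCartRepository>();
    A.CallTo(() => fakeRepository.FindById(42)).Returns(null);

    var manager = new ShoppingCartManager(fakeRepository);
    var controller = new ShoppingCartController(manager);

    // Act: run the scenario under test.
    var result = controller.GetById(42);

    // Assert: a missing object should translate into HTTP 404.
    Assert.That(result, Is.InstanceOf<NotFoundResult>());
}
```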

And those three parts should be used when writing unit tests, because they make the test easy to read. Once you know about arrange, act, assert - also known as AAA - you can look at a test and immediately see the three parts, as if they were written individually. I've seen people use comments to divide them up, too. But you get to a point where you recognize the arrange, you recognize the act, you recognize the assert, and it makes the test more readable.

And this is for testing ASP.NET Core - in this case, a service. Now, a single test on its own has no purpose whatsoever; I need to write more than one. I need to write a couple of tests around that functionality to cover all the cases and make sure my code works.

Moving on to Lambda. This is one way to write C# Lambdas - at this point there are either four or five different ways to write C# in a Lambda. This Lambda gets triggered by a new file in S3 and saves the data to DynamoDB. It has a parameterless default constructor where I initialize the dependencies I need: to read from S3 I have an S3 client, and I have a DynamoDB client as well. And then there's the function handler - that's not a required name, you don't have to use it - and that's where my code resides.
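The shape of such a Lambda looks roughly like this sketch (class and member names are illustrative; the second constructor is the one we'll add for testing in a moment):

```csharp
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.Lambda.Core;
using Amazon.Lambda.S3Events;
using Amazon.S3;

public class Function
{
    private readonly IAmazonS3 _s3Client;
    private readonly IAmazonDynamoDB _dynamoDbClient;

    // Lambda uses the parameterless constructor in production.
    public Function() : this(new AmazonS3Client(), new AmazonDynamoDBClient()) { }

    // Added for testability: lets a test push fake clients in from outside.
    public Function(IAmazonS3 s3Client, IAmazonDynamoDB dynamoDbClient)
    {
        _s3Client = s3Client;
        _dynamoDbClient = dynamoDbClient;
    }

    // The handler name is configurable - "FunctionHandler" is just a convention.
    public async Task FunctionHandler(S3Event s3Event, ILambdaContext context)
    {
        foreach (var record in s3Event.Records)
        {
            // read the new file from S3, process it,
            // and save the results to DynamoDB
        }
    }
}
```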

How do I test this? Well, I need to get rid of those external dependencies. I don't want an S3 bucket, and I don't want DynamoDB. The S3 bucket is relatively simple: the test will run my Lambda function directly, so I don't need to trigger it from S3. For DynamoDB, I'll use a mocking framework to create a fake DynamoDB client, and then I end up with something I can test. But before that, I need to create a constructor - dependency injection that enables me to push the fake objects from the test into my code, so I won't call S3 and I won't call DynamoDB. And the test will look something like this. It looks exactly like the previous test, just different code, but the same ideas. I have the [Test] attribute on the method; this time it's an async Task, because that's the Lambda function's signature. I have an arrange, where I create a fake S3 client, a fake DynamoDB client, an empty event, and the Lambda context that I need, and then initialize my Lambda function - it's very easy to unit test Lambda functions because of that. Then, in the act, I call the function handler. And in this case there's no classic assert; instead, I'm using the mocking framework - FakeItEasy, in this case - to make sure that the fake DynamoDB client did not call the method that saves data to DynamoDB. Essentially, what it says is: if you get an empty event - because of a problem, an error, or by intention - you shouldn't save anything in my database. As simple as that.
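Here's what that test might look like - a sketch assuming the Function class above, and assuming the handler writes via the PutItemAsync(PutItemRequest, ...) overload:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Amazon.Lambda.S3Events;
using Amazon.Lambda.TestUtilities;
using Amazon.S3;
using FakeItEasy;
using NUnit.Framework;

[Test]
public async Task FunctionHandler_EmptyEvent_NothingIsSavedToDynamoDb()
{
    // Arrange: fake both AWS clients so nothing leaves the machine.
    var fakeS3 = A.Fake<IAmazonS3>();
    var fakeDynamoDb = A.Fake<IAmazonDynamoDB>();
    var emptyEvent = new S3Event { Records = new() };
    var context = new TestLambdaContext();
    var function = new Function(fakeS3, fakeDynamoDb);

    // Act
    await function.FunctionHandler(emptyEvent, context);

    // Assert: an empty event must not write anything to the database.
    A.CallTo(() => fakeDynamoDb.PutItemAsync(A<PutItemRequest>._, A<CancellationToken>._))
        .MustNotHaveHappened();
}
```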

So what do unit tests have to do with the cloud? Those two unit tests are very similar - if I hadn't told you about the second one, you probably wouldn't have realized, at the beginning at least, that it's a Lambda function. So what do unit tests have to do with the cloud? The answer is: absolutely nothing. Unit tests have nothing to do with the cloud, and that is a good thing. I can run those on my machine without needing anything - not even an AWS account. I don't need to provision anything; I can make sure that my logic works from the comfort of my own machine. And that's OK, but that's not the whole story - just the beginning. It's a good foundation. If you're only going to write a single type of automated test, unit tests are probably where you want to be, because most of the code you're going to write can be tested with unit tests. But it's not enough.

Looking back at a typical ASP.NET Core application or service: we have a client on one end; we have something at the bottom - databases, AWS services, you name it; and we have the logic - the code you write, from the controller all the way down - which we just tested with unit tests. But there are other parts. We have the presentation layer: everything that happens between a request being received and it arriving at your code, at your controller. And we have the data layer, which is all the logic of how to talk to the outside world. You need to test those two as well.

Starting with the bottom part, we need to test the repository, our client code - when I use HttpClient or the AWS SDK to call services, or some object-relational mapper to call my database, I need to make sure that things work, especially the complex things. If I'm not sure how to save something to DynamoDB, or I've created a very complex query, I want to make sure they work; I don't want to leave them untested. And in order to test them, I will still use a unit testing framework. In a Lambda function, you probably have the same layer - the same code or something similar - because a Lambda function on its own has no benefit; it probably needs to send the data onwards. And as I said, I will use the unit testing framework because it can run my code, and this is still code. I won't test the whole service - I'm just testing the part where I'm calling the outside world, because I already tested the logic with unit tests. There's no need to do that again; that's just wasting my time.

Again: it's not our job to write tests; it's our job to make sure that our code does what we want it to do. And this time we're not faking the external dependencies, because they are the focus - the reason we write the test. We need actual external dependencies. This is an integration test, and it looks exactly like a unit test, except that this time I'm not faking the shopping cart repository - I'm creating a real one that calls an actual DynamoDB database. I create two shopping carts, and then make sure that my query - granted, not that complex in this case - returns the right answer.
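A sketch of such a test, with hypothetical ShoppingCart and ShoppingCartRepository types standing in for the ones on the slide:

```csharp
using System.Linq;
using System.Threading.Tasks;
using NUnit.Framework;

[Test]
public async Task FindByCustomer_TwoCartsInDatabase_ReturnsOnlyThatCustomersCart()
{
    // Arrange: a real repository talking to a real (test) DynamoDB table.
    var repository = new ShoppingCartRepository(_dynamoDbClient, _tableName);
    await repository.Save(new ShoppingCart { Id = "cart-1", CustomerId = "customer-a" });
    await repository.Save(new ShoppingCart { Id = "cart-2", CustomerId = "customer-b" });

    // Act: exercise the actual query against the actual database.
    var carts = await repository.FindByCustomer("customer-a");

    // Assert
    Assert.That(carts.Single().Id, Is.EqualTo("cart-1"));
}
```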

And the reason I'm able to do that is that I use another capability of the unit testing framework. Not only can a unit testing framework run methods - it also has special methods for setup and teardown. Setup runs before each and every test, teardown after each and every test. When I'm using external dependencies, I really want to use those - not so much in unit tests, by the way. Inside the test, I don't really care how that DynamoDB database was initialized; I just care about my test. So I put the code responsible for getting a connection to the already-existing database in the setup - we'll get to where that database comes from - and I add teardown code to clean up between tests. Because remember: one test should not affect the next one, and I should be able to run them as many times as I want, so I need to make sure I clean up after each and every run. By doing that, I'm able to test basically the same code as the unit test, but with real objects and real dependencies.
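In NUnit, that looks something like this (the two helpers are hypothetical placeholders for whatever connection and cleanup logic your database needs):

```csharp
private IAmazonDynamoDB _dynamoDbClient;

[SetUp]
public void SetUp()
{
    // Runs before each and every test: connect to the already-existing
    // test database, so the test itself stays focused on the scenario.
    _dynamoDbClient = CreateTestDynamoDbClient(); // hypothetical helper
}

[TearDown]
public async Task TearDown()
{
    // Runs after each and every test: remove whatever the test created,
    // so tests can run in any order, any number of times, with the same result.
    await DeleteAllTestItems(_dynamoDbClient, _tableName); // hypothetical helper
}
```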

But there's a problem with external dependencies: they're not easy to use. External dependencies are a headache. First of all, there's complex initialization: I need to set up a database, fill it with data, and clean up between tests. That complex initialization is not always possible to automate or run from within my test. On top of that, it all takes time - setting up a database from scratch probably won't take less than a second, and can take up to a few minutes. I may also have asynchronous operations when I'm using external dependencies - let's say it's not a database but a queue. And it's costly: an integration test will probably run for a few seconds or even a few minutes, which is something to consider - even a one-minute test means that sixty of those will run for an hour, and that's painful. And finally, there's shared state - and that's the real hazard of using external dependencies. If I have one database for the whole company and everybody happily runs the tests, someone is going to fail for no apparent reason, because two developers ran the same test at the same time and one accidentally tripped the other by deleting the item the first one was expecting to see. Or your build server just ran a build, you tried to run the tests at the same time, and one of you failed - maybe both. Shared state is the real problem with external dependencies, and we need to somehow solve it.

So how can we do that? Well, imagine a case where I have a test, I have my code, and that code reads from or writes to a database, or both. A lot of companies set up the database like this: they have the build machine, the dev machines, the unit testing framework running the integration tests - and one database running somewhere in the data center or in the cloud. And that is a problem, because everybody is going to read from and write to that database. You don't want that. You essentially want to have the database on your machine, not shared with anyone else - and you can do that.

Well, first of all, you can install whatever you use on your machine - you can install DynamoDB Local, you can install SQL Server on your machine. But that's not good enough. I want to be able to start up the database before the tests, and shut it down immediately when the tests finish running, leaving nothing behind to cause me grief when I continue development or run another test. And one way to do that - one I have been very successful with - is using Docker to set up external dependencies and then shut them down immediately when the tests finish.

And you can see this piece of code - a quick and dirty one; there are other ways to do it. The trick here is that, just like the setup method, every unit testing framework has the notion of setup that needs to happen only once, before all the tests run. In NUnit you use the [OneTimeSetUp] attribute, and the same goes for teardown - there's a [OneTimeTearDown] attribute. In this case, I'm using an external process to run a command line that pulls a MongoDB Docker image - from Docker Hub, in this case - starts it, and then waits until the database is up and running and ready to receive information. This whole thing takes up to two minutes on my machine.
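A quick-and-dirty sketch of that trick, shelling out to the Docker CLI (the readiness check is a hypothetical helper that polls until the database accepts connections):

```csharp
using System.Diagnostics;
using NUnit.Framework;

[OneTimeSetUp]
public void StartDatabaseContainer()
{
    // Runs once, before any test in the fixture: pull and start a
    // throwaway MongoDB container.
    Process.Start("docker", "run -d --name test-mongo -p 27017:27017 mongo")
        ?.WaitForExit();

    WaitUntilDatabaseIsReady(); // hypothetical helper: poll until it responds
}

[OneTimeTearDown]
public void RemoveDatabaseContainer()
{
    // Runs once, after all tests: dispose of the container so nothing is left behind.
    Process.Start("docker", "rm -f test-mongo")?.WaitForExit();
}
```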

And when I'm done running all the tests, it just disposes of that Docker container. That way I have my own database that only runs during tests. The first test run takes slightly longer, but then I use the same database instance for all my tests, one after the other. That will work with certain dependencies, though - not all of them - especially when we're talking about cloud services, right?

It's re:Invent - we have to talk about cloud services. What happens if I'm using DynamoDB, as in that Lambda example? I need to save things to DynamoDB. Getting rid of the S3 bucket - we already did that; I replaced it with the test. That's simple. But the problem is that while my code runs on my machine and the test runs on my machine, it calls an AWS account - hopefully my development account - creates a table there, and then starts writing and reading stuff from it.

And that also means that anyone else who runs the same test, or the same group of tests, will do the same - and all of us will merrily trip one another when we run the tests together in parallel. That is painful; I don't want that. But having multiple accounts, one for each developer, is rather costly, depending on the number of developers on your team. So you want another way to isolate those test runs from one another.

One way is simple: just create a different table for each test run. Add a timestamp or a GUID at the end of the table name - and this works with multiple other AWS services too. Create it in the one-time setup, before running all the tests; run your tests; and then delete it once you're done. That way, multiple test runs will not affect one another. That's one option.
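A sketch of that option for DynamoDB - a uniquely named table created once per test run and deleted at the end:

```csharp
using System;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using NUnit.Framework;

[OneTimeSetUp]
public async Task CreateTestTable()
{
    // A unique name per run keeps parallel runs (developers, build agents)
    // from tripping one another.
    _tableName = $"shopping-carts-test-{Guid.NewGuid():N}";

    await _dynamoDbClient.CreateTableAsync(new CreateTableRequest
    {
        TableName = _tableName,
        KeySchema = new() { new KeySchemaElement("Id", KeyType.HASH) },
        AttributeDefinitions = new() { new AttributeDefinition("Id", ScalarAttributeType.S) },
        BillingMode = BillingMode.PAY_PER_REQUEST
    });
    // In practice, poll DescribeTable until the status is ACTIVE before continuing.
}

[OneTimeTearDown]
public async Task DeleteTestTable()
{
    await _dynamoDbClient.DeleteTableAsync(_tableName);
}
```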

The second option is to use local emulated services. What are local emulated services? Basically, something that does a good impression of an AWS service. For DynamoDB we have DynamoDB Local - a piece of Java code that you can download and run, within Docker or directly on your machine, it's up to you - and it does a very good impression of a DynamoDB database. You can call it as if it were the cloud service. There are commercial solutions for different AWS services - just Google and you'll find those - and there are also quite a few projects on GitHub doing the same thing. You can find local emulated versions of the AWS services.

And once you have it locally, most of them can be either launched from your code or run inside a Docker image, and you can use them to make sure that multiple build and test runs will not use the same instance - everyone gets their own "AWS" on their own machine.
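Pointing the AWS SDK at an emulator is essentially a one-line configuration change - for example, with DynamoDB Local (assuming it's listening on port 8000):

```csharp
using Amazon.DynamoDBv2;
using Amazon.Runtime;

// Started beforehand with, e.g.: docker run -d -p 8000:8000 amazon/dynamodb-local
var config = new AmazonDynamoDBConfig { ServiceURL = "http://localhost:8000" };

// The emulator requires credentials but ignores their values,
// so dummy ones are fine. The rest of the code stays unchanged.
var client = new AmazonDynamoDBClient(
    new BasicAWSCredentials("fake-access-key", "fake-secret-key"),
    config);
```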

Now, with those emulated services, your mileage will vary. Some of them do not implement the whole breadth of capabilities of the service they're trying to emulate. And none of them will replicate AWS roles and permissions - by definition, they run on your machine - so don't count on them to find problems in that area.

So how do you decide between the two? How do you decide whether to use real services or the emulated versions of those services? Do I run my code against AWS for testing purposes, or do I use some local version I found on GitHub, bought from another company, or got from AWS?

Well, there are differences between the two. Real services are very accurate, because it's the actual service - it works exactly as it will work in production, including permissions and the full API; all the calls you expect to work will work. They're also very simple to initialize: probably the only thing you need to do is use a slightly different configuration, to make sure you write to the S3 bucket, the queue, or the DynamoDB table that you use only for testing, and not to production.

Now, the downside is that it's not free. Well, it is free up to a point, because we have a free tier, but if you have a big company with a few hundred developers and your build system is working all the time, you'll see those costs. For the services I just mentioned it probably won't be a lot, but it will cost money. And there's latency, because you're calling a cloud service - again, not a lot of time, but if you have 1,000 tests or more across several code bases, those times tend to accumulate. At the end of the day, you have to remember: it's not your job to make sure that AWS services work - we have engineers for that. It's your job to make sure your code calls those services correctly.

And so the challenge, again, is the shared state: how do you make sure that different test runs, by different users or different automated build systems, do not hurt one another?

On the other hand, emulators are highly consistent - although it depends on the specific project you use - meaning they will probably perform exactly the same every time, because they tend to save in memory or on disk, or do some trick around the data you save and read. They are very fast - this is code running on your machine or on your build server - and you can isolate the different test runs, because every test run gets its own emulator.

On the downside, you get only partial functionality, and that partial functionality can be an issue for your system. I have noticed that services such as S3 are harder to mimic correctly - all the APIs, in the correct way - because, I guess, it's a tougher problem than DynamoDB. So you need to check whether or not that's a good idea for your project. And the challenge here is usually initialization, because it's not just calling the AWS SDK - please create a new table, a new bucket, a new queue. You probably need to use Docker, call it from your code, or start a separate process in order to start that emulator - and you need to be able to clean up afterwards. You don't want it sticking around after the tests finish running.

How do you choose between the two? Well, the good news is that if you've been using those setup and teardown methods, and configuration in your code, then even if you decide to move from one to the other, your tests will remain the same. It's just a matter of deciding how to initialize and where to point your code: this is the queue you need to talk to - point it at your machine. And if at some point you want to move to the cloud, you just tell it to work with a queue you created in the cloud. So moving between the two is relatively simple.

Another thing to consider when choosing is the size of your development team. If you choose to use real services and you have a few hundred developers, you need to think about how to do that with a single account or multiple AWS accounts, because at some point you start hitting the limits on the number of things you can create.

Another thing with real services: remember that some operations are eventually consistent. If you just deleted an S3 bucket, it might stick around for a bit - so keep that in mind. With the emulators, operations are usually immediate, because they just mimic the API; they don't really perform the operations.

OK, so that's relatively simple. What about asynchronous operations? An asynchronous operation is when something does not happen immediately. Our tests are usually very naive: we expect the test to run, get to the end, and check the result - and with an asynchronous operation, that's not possible. Or at least, it will only look as if it's possible: you get a test that passes on your machine and then fails on the build server. And again, you end up with a nondeterministic, untrustworthy test.

Usually with queues, when you run the test on your machine, everything works perfectly. Even if you use the Simple Queue Service, SQS, you'll see everything arrive immediately; your test will pass. Then you deploy, your automated build system kicks in, and the test fails. And then you get to the point where you can't trust your tests anymore.

So you need to do something about it. Now, in that case, I usually have my dev machine, I have the AWS cloud, and the queue is not on its own - there's a service or a Lambda function after it that does some processing. So the first order of business with integration tests is breaking that into two distinct parts. The first part is everything that happens until I queue a new message - I need to test the different scenarios that happen between my code and that queue. The second part is what happens when a new message arrives from the queue at my consumer.

Now, both of those are probably asynchronous, and we'll solve that in a minute. But by breaking up the different dependencies, I essentially make the problem easier to solve, and I make my tests more maintainable. We haven't gotten to end-to-end tests just yet, so always make sure to break up the different dependencies - don't try to run too much of your code at once.

But what can I do about the queue itself? A queue, by definition, is asynchronous. Messages queued to SQS, for example, tend to appear immediately, but that's not guaranteed. Again, in your tests everything will seem to work perfectly - and then you deploy, and something bad happens.

So what do we do? Let's say I have a test, a Lambda function, and a queue. In my test, I will create a new instance of that Lambda function during the arrange part. Then, during the act part, I will call that Lambda function, and it will queue a new message. And then, instead of immediately trying to grab the message from the queue, I'll have a busy loop running, checking for messages. If a new message arrives, I compare it to whatever I expect to get; if I don't get the message, I try again, until a certain amount of time or a certain number of retries reaches a limit I decided in advance - say, half a minute, a minute, or 100 attempts. Only then will the test fail. That way, I take something nondeterministic - an asynchronous operation - and make it deterministic with this busy loop.
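A sketch of such a polling helper for SQS - retry until a deadline, and let the test fail clearly if nothing arrives:

```csharp
using System;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

private static async Task<Message?> WaitForMessage(
    IAmazonSQS sqs, string queueUrl, TimeSpan timeout)
{
    var deadline = DateTime.UtcNow + timeout;
    while (DateTime.UtcNow < deadline)
    {
        var response = await sqs.ReceiveMessageAsync(new ReceiveMessageRequest
        {
            QueueUrl = queueUrl,
            WaitTimeSeconds = 5 // long polling: wait up to 5s per attempt
        });

        if (response.Messages.Count > 0)
        {
            return response.Messages[0];
        }
    }

    return null; // the test asserts on null and fails with a clear message
}
```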

And I can do that with a queue, because the queue will wait for me: any message I put inside the queue is guaranteed to be there whenever I try to get it. But that's not true for every service and every dependency. For example, Amazon SNS, the Simple Notification Service, will fire and forget messages. I can initialize my test and call the function; it will publish a message to SNS, and SNS will forward the message to whoever is listening - but no one is listening, because I haven't gotten to my assert part yet. And again, I have a time-based test, and a time-based test is not deterministic and not trustworthy.

So instead, before I run the code that creates and triggers the Lambda function, I will create a queue - either emulated or an actual queue - connected to the topic, and let it wait for the messages for me. And we already know what to do with a queue: we go and poll for messages until one arrives - until the right message arrives. That way, again, I can take something nondeterministic and make it deterministic. And it doesn't have to be a queue.
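The AWS SDK for .NET makes that setup fairly painless - as far as I know it ships a SubscribeQueueAsync helper that creates the subscription and sets the queue policy in one call. A hedged sketch, with illustrative queue and field names:

```csharp
using System;
using System.Threading.Tasks;
using Amazon.SimpleNotificationService;
using Amazon.SQS;
using NUnit.Framework;

[OneTimeSetUp]
public async Task AttachTestQueueToTopic()
{
    // SNS is fire-and-forget, so before triggering the code under test
    // we attach a queue that holds the notifications until the assert phase.
    _sqsClient = new AmazonSQSClient();
    _snsClient = new AmazonSimpleNotificationServiceClient();

    var queue = await _sqsClient.CreateQueueAsync($"sns-test-{Guid.NewGuid():N}");
    _queueUrl = queue.QueueUrl;

    // Wires up the subscription and the required queue policy in one call.
    await _snsClient.SubscribeQueueAsync(TopicArn, _sqsClient, _queueUrl);
}
```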

You can also use anything else: you can put a Lambda there and save the data in a database, and then check in the database which calls arrived - depending on what you're trying to test and how simple or difficult that is. You can set up additional infrastructure just for your tests, not for production - I'm not deploying this kind of thing into my production system. But at least in testing, I can create a queue and get the message out of it.

Is that the end of the story? No - we still have at least one more type of test to cover. Looking at Lambda functions, we can split them into the code before the handler - before the Lambda gets triggered; the actual logic - the code I'm writing; and the repository - everything that goes to the next thing in line, outside of my code. The logic is tested using unit tests, and the part where I call someone else we already tested using integration tests.

But there's a part I haven't tested: what happens before my Lambda gets triggered. I sort of tested what happens after it returns its result, but I'm not sure whether this whole workflow actually works, start to finish. I could write the best unit tests and the best integration tests, then run the system - it has happened to me more often than not - and find out that the event I expected to receive actually looks slightly different from what I expected.

And then the Lambda function would crash every time a new file arrived. I don't want that - I want to make sure it works before I tell everybody I'm done. Now, until now - I don't know if you noticed - both unit tests and integration tests don't care where you're running; they don't care if it's a Lambda function, a console application, or an ASP.NET Core microservice. With end-to-end tests, it's different.

End-to-end testing is more involved, because here we do look at the underlying hosting technology - it's important; it's part of what we are testing. Now, can we do that on our machine? Can I run a Lambda function on my machine? The answer is: yes, I can - but I probably shouldn't, not for automated testing. And I've seen companies that were very successful doing exactly that, by the way. I can run a Lambda function on my machine using SAM Local - if you've never heard of it, it lets you take a Lambda function and run it inside Docker on your machine - and there are other options; we talked about emulation, so I can mimic those services on my machine as well. But it defeats the purpose of writing an end-to-end test: the more I fake, the less I'm testing. An end-to-end test is where I'm concerned about my messages, my configuration, how the different services really work with one another - and, of course, permissions. Maybe I forgot to put the right permission on the Lambda function; I need to know that, and I cannot find that out on my machine. I want the real application. And that's why end-to-end tests are different from the other two: integration tests and unit tests can be run before deployment - should be run before deployment - while end-to-end tests run after deployment, because I'm testing what happens when I go to the cloud.

So with the Lambda function, I will first deploy it to the cloud using whatever method I like - SAM, CloudFormation, Terraform, you name it. Once it's in the cloud, I will run the test - not from the cloud, but from my machine or from my build server - to test that specific deployment. During the act part of my test, I will go and put an actual file in S3. I'm not calling my Lambda directly, because I don't want to test that - I want to test my whole system: put something in one end and get the result on the other end. So I'll put the file in S3 and then start polling for messages on the other end, to make sure that my system behaves as expected. The test will look something like this - and again, I'm using a unit testing framework, because it's just a very simple way to run my code. But this test is not a unit test; it's an actual end-to-end test running against AWS.

Well, the test runs on my machine; the actual Lambda runs on AWS. I'm going to use the AWS SDK to save a file to S3, and I'm going to use the AWS SDK to poll for messages - that's an extension method I wrote, because there isn't enough space on my slides to show the code that just loops until I get the message back. I also need to make sure that I'm using the right bucket name and the right SQS queue URL. One way to do that is to pass them as environment variables to the test run. But if you've been using CloudFormation, for example - I have tests that go and find the CloudFormation stack and read that information out of it during the one-time setup. That way I always know that I'm using the right instance, and I can also run in parallel, because I can have different deployments, each using different infrastructure. And that's how I do Lambda end-to-end tests.
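Putting it together, the end-to-end test might look like this sketch - the bucket and queue names come from the deployed stack, and WaitForMessage is the same polling helper from before:

```csharp
using System;
using System.Threading.Tasks;
using Amazon.S3.Model;
using NUnit.Framework;

[Test]
public async Task NewFileInBucket_IsProcessed_ResultArrivesAtTheOtherEnd()
{
    // Act: put a real file in the real, deployed bucket - the actual
    // trigger - so permissions and wiring get exercised too.
    await _s3Client.PutObjectAsync(new PutObjectRequest
    {
        BucketName = _bucketName, // read from the stack outputs in one-time setup
        Key = $"e2e-test-{Guid.NewGuid():N}.json",
        ContentBody = "{ \"value\": 42 }"
    });

    // Assert: poll the far end of the workflow until the result shows up.
    var message = await WaitForMessage(_sqsClient, _queueUrl, TimeSpan.FromSeconds(30));
    Assert.That(message, Is.Not.Null, "expected the processed result to arrive in the queue");
}
```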

With microservices, there's a slightly bigger scope to test when I'm testing the whole system end to end. I have services that call other services, and those services call yet other services, all of them with databases. And sometimes I even call things outside of my code - some other team's code, some other company's code - services that I don't really have on my system for testing purposes; maybe it even costs money to call them back and forth all the time. And I do need those tests - I need them to make sure that my whole system works. But I want to use end-to-end tests with microservices - especially if my code base is big enough, or too big, depending on how you look at it - only for key scenarios: only for the important stuff, or the things that tend to break. So I need something in between, something between unit tests, integration tests, and end-to-end tests. And that something is the service-level test: take a single service and make sure that it correctly implements all the requirements of that service.

I will still use the unit testing framework in this case - it tends to be my easy way to run things - but I'm not going to call my code directly. In this case, with ASP.NET Core, I will probably use HTTP or some other form of communication to call my service. And what about dependencies? Well, if they're AWS services or databases, we already know how to solve that; we know how to set those up for that specific service - you'll usually have one, two, maybe three; you don't have a million dependencies for a single service. But what about other services? We'll get to that. First, I need to be able to run that service for my test in a predictable, easy way, and call it as if it's running outside of my test. I don't want to find some weird way to deploy it and then try to kill it from a different process.

And we can do that, because the ASP.NET team has given us TestServer. If you're using .NET 5 or earlier, you can use TestHost: it enables you to create an actual service, top to bottom. You can change the configuration - and you probably want to, because you want to point it at your real or emulated AWS services, just like with integration tests. Then you can create a client, and that is a regular HttpClient. The nice thing about TestServer is that it chooses the port for you, so you don't accidentally trip yourself up by running on a port that's already in use on that machine. And you get a client - an HttpClient that can be used as-is in your tests. If you're using .NET 6 onward, you can do exactly the same with WebApplicationFactory - I'll give you pointers to source code that does exactly that; the code is slightly different but does exactly the same thing: create a service, change a bit of configuration for testing, and then create a client that you can use in your test to call your code. And this is essentially a service-level test: it uses that client to create a new coupon with an HTTP POST, then uses HTTP GET to fetch that coupon and find out whether it worked. So I'm testing my API as well, all the way down to the database and back up again. And I usually don't stop there: I tend to create client libraries for everyone else to use - for other services to call my code. I can use this HttpClient inside those client libraries, and thereby also test that the code I provide to other teams or other services works exactly as I expect. Because what tends to happen is that the URL is not exactly what I expected, or I made a mistake - and these tests will catch that. So I can test the service-level requirements top to bottom this way, all the way down and back up again.
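On .NET 6 and later, a service-level test along those lines could look like this sketch (the coupon routes are illustrative; on .NET 5 and earlier, TestServer plays the same role):

```csharp
using System.Net.Http.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using NUnit.Framework;

[Test]
public async Task CreateCoupon_ThenGetIt_ReturnsTheCreatedCoupon()
{
    // WebApplicationFactory hosts the whole service in-memory and picks
    // the port for you. (With top-level statements, expose Program via
    // `public partial class Program {}` so the factory can see it.)
    using var factory = new WebApplicationFactory<Program>();
    using var client = factory.CreateClient(); // a regular HttpClient

    // Act: exercise the real HTTP pipeline, top to bottom.
    var post = await client.PostAsJsonAsync("/api/coupons",
        new { Code = "SAVE10", Discount = 10 });
    post.EnsureSuccessStatusCode();

    var get = await client.GetAsync("/api/coupons/SAVE10");

    // Assert: the coupon went all the way down to the database and back.
    Assert.That(get.IsSuccessStatusCode, Is.True);
}
```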

What happens with other services? For other services, there are projects out there that are similar to mocking frameworks, but they fake a whole HTTP service. There's a project called MockServer, which you can initialize from your code and use, and WireMock.Net does exactly the same thing. The idea is that before I run my test, I initialize this fake server from my code; it starts up, waits to receive requests, and returns its URL. And I can set expectations and behavior: when you get this path, with this variable, with this parameter, you should return this value. Essentially, it's a mocking framework for a call to an external service. You could have written the same code yourself - but why, when someone already did it and provides it completely free to use? Once I've set up the expectations for a specific test - one test, because I clean those expectations between tests - I make the call to my service. It calls the fake server, which I initialized and ran on my machine and which stands ready to hand out responses, and it returns the values. Then, if I want, I can assert on the result from my service, or I can go back to the fake server and ask it: did someone call you with HTTP GET and these parameters? This is essentially how I use it - as a mocking framework that sits outside of my code. And it's very simple to use.
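With WireMock.Net, for example, the flow looks roughly like this (the path and payload are illustrative):

```csharp
using WireMock.RequestBuilders;
using WireMock.ResponseBuilders;
using WireMock.Server;

// Start the fake HTTP server before the test; it picks a free port
// and exposes it via server.Url - point your service's configuration there.
var server = WireMockServer.Start();

// Expectation: when my service calls GET /prices/42, return a canned answer.
server.Given(Request.Create().WithPath("/prices/42").UsingGet())
      .RespondWith(Response.Create()
          .WithStatusCode(200)
          .WithBody("{ \"price\": 9.99 }"));

// ... run the scenario against your service, then assert on its result,
// or inspect server.LogEntries to verify exactly what was called.

server.Stop();
```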

OK, so circling back on test automation: we have different types of tests for different purposes. We have unit tests that test logic; we have integration tests that test the interaction between different parts; we have service-level tests, which can also apply to a Lambda function, depending on the size of the code and on how long your entire pipeline or workflow is - with Lambda functions, you might want to break it down. And finally, we have end-to-end tests for key scenarios.

Now, in order to be efficient, make sure that you're not testing the same thing twice. If I already tested the logic with unit tests, there's no point in doing it again in an integration test. And if you have integration tests on the other end, there's no point in testing again that exactly the right message reaches your database, because the integration tests cover that - and the same goes for messages, because we cover those in service-level tests. So it's a matter of balancing between them, and it's not an all-or-nothing kind of situation. It's a matter of where you invest your time, and how easy it is to test your specific system. You have to invest your time where it matters: where you have problems, where the code tends to change.

I have worked with companies that didn't have a single end-to-end test - just unit tests, integration tests, and service-level tests - and they were highly successful. It depends on your project, basically, and you get to decide. And if you have all your tests and something bad still happens, maybe there's a new type of test you haven't been using that will solve that problem.

Now, I did show a lot of code samples, and all of those - all the demos, all the different pieces of code I've shown you - exist in one of two repositories. The first one is an ASP.NET Core microservice testing sample: it has three services - two HTTP ASP.NET Core services and one background worker connected with a queue - with DynamoDB and MongoDB in there, and it shows how to test them using unit tests, integration tests, and service-level tests. In the service-level tests I also use SpecFlow for BDD. The second one, the serverless testing sample, contains different samples for different languages - we have Java, C# (.NET), and TypeScript - and you can see different scenarios and how to test them: different AWS services and different ways to write Lambda functions. Now your job is to go there and find the ideas, the ways to write code and test code, that will benefit your project. I'll give you a few more seconds if you want to take a picture. But before you leave, something very important: don't forget to tell me what you think. Go to the app and provide feedback - at AWS, as you've probably heard more than once this week, we do care about feedback. I want to know what you think about this session. I've been giving it in several ways in different places, and I want to hear what you need, what worked for you, and which things you're going to go and implement in your project this Monday. OK, so don't forget to go into the app and provide feedback - tell me what you liked, what you didn't like, and how I can improve.

余额充值