Application modernization, microservices & the strangler fig pattern

You are here because you want to increase agility, innovate faster and deliver new customer features as quickly as possible. What's keeping you from achieving these goals is the very thing that has created success for your organization in the past: large monolithic applications that have been built over years, sometimes decades, and that are difficult to change, time-consuming to test and often prone to errors.

In this session, we are going to introduce some best practices, design patterns and a purpose-built AWS service that's going to help you break out of that monolith and modernize your applications and your teams.

I'm RP Adams. Over here is Adam, who will join the stage in a bit. We've got a lot to cover, so let's get going.

As always, we're going to start with the why. We're then going to discuss a few common design patterns and best practices that we at AWS have observed with our customers who have found some success.

We're then going to welcome Adam, who's going to cover the Affinity story. Affinity is an incredible organization with a product that meets the right need at the right time. I'm very excited to have you hear the story of Affinity and how they used AWS services to find success and move faster. They'll discuss their challenges, the solutions that they created and, ultimately, the results and outcomes that they're going to share with you today.

If we try to answer the question of why microservices are such a hot topic and why you hear AWS talk about them a lot and why our customers are choosing to go with this design architecture, it's important to start with the business drivers and the technical problems that it solves first.

On the business side, everyone from the CEO down to the line-of-business owners, and even our customers, is always saying "go faster." This is required in the competitive environment that all of us operate in today.

There's a very short window of market opportunity in which you can take advantage of a need and fill it with what you build and what you can delight your customers with. But just being first to market and being fast isn't enough. You also need to deploy these applications so that they're stable and high quality. The worst thing that you could do is deploy applications that aren't available, are filled with defects, cause errors and don't deliver what was promised to the customer, because the next time you're first to market, those customers will likely not even give you a try.

Finally, be cheaper. And we mean total cost of ownership here: not only what it says on your AWS bill, but the true cost of those resources, the builders who are creating those applications. These are four levers that don't move independently. A lot of times when you move one, others must follow. But this is what the business is requiring of us.

On the technical side, why do we look to microservices? I mentioned that monolithic code bases have often been developed over years or decades, but they exist within startups as well. A lot of times when startups are trying to assess market fit, they'll start with a monolithic app, find success, have better-defined business domains and will then want to move to microservices.

This creates problems. Monoliths are difficult to change and hard to test just by their sheer size and the number of interdependent paths that exist within the code. A lot of times, when we're trying to deliver our roadmap items on time, we as builders will take shortcuts and put in code that, while working, is not the most optimal. This builds up over time and generates technical debt.

Then when you go in to try to enhance the feature, add to it, build upon it, you oftentimes run into maintenance difficulties because a lot of the code needs to be refactored before you can even begin.

And finally, there's our data tier. A lot of times monolithic applications are accompanied by monolithic databases, typically one large relational data store, sometimes with complex business logic built right in via stored procedures or triggers. Making changes yields unpredictable results. This is where the error-prone portion comes in, and the difficult-to-test portion.

A lot of times these issues don't rise to visibility until you actually release the application, and you're told about them by the worst possible source: your customers.

Our definition of microservice is here. I want to touch on a few key characteristics though about microservices.

The first is that these services are small. I mean, it's right there in the name, right? That's why we call them micro. This creates services that are independently deployable and work together and they're modeled around a business domain. They communicate with each other over networks. And this is something that is relatively new in terms of a capability.

And a lot of times you'll hear folks call monolithic applications "legacy applications" as if something was done wrong when you developed them. The truth is they use architectures and resources that were the best decisions at the time. But the computing world evolves over time.

Now we have cloud infrastructure with very reliable and stable networking, which affords us the ability to create these smaller microservices that communicate with each other over networks.

The data of that microservice is hidden inside of that service boundary. All access to that data goes through a contract. Most of the time this is going to be an API. In some event driven architectures, this could be a schema on the event, for example. But all access to the data is behind the service boundary and access goes through that contract, no exceptions.

And then finally, the principles of information hiding - what this means is that anything within the service that is likely to change, changes often, will be hidden behind that service contract so that it doesn't impact the consumers of that microservice.
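To make the contract idea concrete, here's a minimal sketch in Python. All names are hypothetical: the point is only that the service owns its data privately, and the sole way in or out of the boundary is through its public contract methods.

```python
# Hypothetical sketch: an orders microservice whose data store is
# reachable only through its public contract (the API methods).
class OrdersService:
    def __init__(self):
        # Private data store: consumers never touch this directly.
        # Internal representation can change without breaking callers.
        self._orders = {}

    # The contract: the only way across the service boundary.
    def place_order(self, order_id: str, item: str) -> dict:
        self._orders[order_id] = {"item": item, "status": "PLACED"}
        return {"order_id": order_id, "status": "PLACED"}

    def get_order(self, order_id: str) -> dict:
        order = self._orders[order_id]
        return {"order_id": order_id, **order}


svc = OrdersService()
svc.place_order("o-1", "book")
print(svc.get_order("o-1")["status"])  # PLACED
```

Because consumers only ever see the dictionaries the contract returns, the internal `_orders` structure (the part "likely to change") can be swapped for a database table without impacting any caller.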

So why are we refactoring into microservices? What are we getting out of it? Those are the characteristics, but what do they yield in terms of results?

The first is a reduced deployment scope of impact. If you do have an issue with a microservice, if you made an error, checked in a bug or a defect, it's only going to impact that specific service, as opposed to a monolith, where an out-of-memory condition brings down the entire application and everything is now offline. So the scope of impact is greatly reduced by virtue of having these small services that are decoupled and independent.

They provide functional autonomy because they have a single responsibility. This allows far fewer communication paths throughout the organization and the application.

Each of these gives increased development velocity. So I want you to think about how often you release your main, primary application within your organization. Is it days, weeks, months? I've worked with customers that have such difficulty testing their monoliths that they've reduced their deployment frequency to once a quarter.

Now, think about the impact of that on the business. How difficult is it to respond to market events and develop new features for your customers if you're only able to show them anything new once every three months?

Microservices greatly increase the ability to release more often. Adam's going to share some stats on Affinity when he comes on stage. But at AWS we literally have thousands of releases every single day. Now, they might be a bit reduced right now because of re:Invent. We want to make sure everything's running well. But for the most part, thousands of releases every day because we do depend on microservices.

You also attain focused scalability. Say you have a particular portion of your application that disproportionately requires scaling. A good example of this is an online shopping application: typically a lot of the catalog pages and images are served over a content distribution network, a CDN, and the cart tends to be the component handling most of the transactions: adding, removing, checking out, etc.

And so in this environment, the cart could be a microservice, and you can scale it independently, either as a function call through a Lambda function or in a dedicated pool of containers that can scale independently of the rest of the application.
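As a rough illustration of that cart-as-a-microservice idea, here's a hedged sketch of a cart service written as an AWS Lambda handler. The event shape follows the standard API Gateway proxy integration, but the routes, payloads and in-memory store are made up for the example; a real service would persist to something like DynamoDB rather than module state.

```python
import json

# Hypothetical cart microservice as a Lambda handler behind API Gateway.
CART = {}  # illustration only; real state belongs in a database


def handler(event, context=None):
    path = event.get("path", "/")
    method = event.get("httpMethod", "GET")

    # POST /cart/items adds (or increments) an item in the cart.
    if method == "POST" and path == "/cart/items":
        item = json.loads(event["body"])
        CART[item["sku"]] = CART.get(item["sku"], 0) + item.get("qty", 1)
        return {"statusCode": 201, "body": json.dumps(CART)}

    # GET /cart returns the current cart contents.
    if method == "GET" and path == "/cart":
        return {"statusCode": 200, "body": json.dumps(CART)}

    return {"statusCode": 404, "body": json.dumps({"error": "not found"})}


# Local invocation with a fabricated proxy-integration event:
print(handler({"httpMethod": "POST", "path": "/cart/items",
               "body": json.dumps({"sku": "abc", "qty": 2})})["statusCode"])  # 201
```

Because the handler is just a function of the request, scaling is handled entirely by the platform: concurrent invocations scale the cart independently of catalog pages served from the CDN.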

And then finally, I mentioned this before, but service separation by business function. This is really key as you're developing new features, working with line-of-business owners and developing customer functionality. By reducing that down to a single business function, you reduce the number of people that need to be involved in developing that new feature, reduce the scope of responsibility and allow yourself to move faster.

And we advocate for using a domain driven design approach to defining those service boundaries. And in the end, I'll have a link to some resources to talk more in detail about event driven design and how you can use it within your own organization through event storming.

So all of that sounds great, but we get some common concerns with microservices:

  • This just takes too long. Our monolith is massive. This is going to take years, that's way too expensive. I can't pull all my builders off of everything they're doing now and focus on refactoring into microservices.

  • I just don't have the AWS skills. We're new to the cloud. I have a limited set of builders that have the ability to actually utilize the cloud native services to create these microservices.

  • And finally, I have to change my organization first. I need to redevelop my teams focused around microservices, aligning them to the domains that we're going to define.

There's a great quote here by Martin Fowler that I like:

"If you do a big bang rewrite, the only thing that you are guaranteed is a big bang."

So here's what I want you to take away - think big but start small. Incremental is the key here.

Before the couple of use cases I want to cover, first a foundational best practice. When you think about microservices and that deployment autonomy, that ability to isolate, I want you to go a step further and follow the best practice of refactoring your application's account structure first.

So when we first get into the cloud and start using AWS, it's very common to have one single big account that becomes a kitchen sink where you throw everything in. But the ability to create new AWS accounts, and to manage them through AWS Organizations and AWS Control Tower, is now more automated than ever.

So think about creating additional AWS accounts, not necessarily one per microservice, but at least one per subdomain, which is a small collection of microservices. That said, I work with a lot of customers that actually create one for every microservice. Adam's going to share a bit about what Affinity does in this space as well.

A few of the use cases:

  1. What I like to call "leave and layer": if you have an existing monolith and want to take advantage of microservices, and you have a brand new greenfield feature or functionality to add to your application, instead of just bolting it onto that existing monolith, think about creating it as a brand new microservice. This allows you to quickly launch that new capability in a microservice architecture.

  2. The second use case, and the one I'm going to dive a little deeper on, is to refactor incrementally. So you design, implement and test the new functionality and when it's ready to go, you redirect the old path for that API to the new microservice. This pattern is called "strangler fig."

The name "strangler fig" actually comes from nature. The strangler fig tree is indigenous to eastern Australia, specifically northern Queensland. The plant finds a host tree and grows up it, and the fig starts to consume resources directly from the host; it's a form of parasite. It then grows around the tree, takes the shape of the tree, and eventually the tree inside dies off. What's left is the new strangler fig tree in the exact shape of the old tree. So it's a great way to reflect on how nature represents a design pattern here.

And so the pattern gradually creates this new system around the edges of the old one and allows it to grow over several years. If you do have a big monolith and you're implementing strangler fig, strap in and expect that it's going to be something you're doing over months, if not years.

And in some cases, the monolith is reduced in scope so much, and the services that remain are those that don't change often or are not business critical, that the monolith can exist for some time on its own and not impact your agility and speed of innovation at all.

So I mentioned the multiple accounts, let's take a look at kind of what that looks like from an architecture perspective.

So when you start out, you'll have your existing monolith within an AWS account, which you see over there on the left. You define your new microservice: you spin up a brand new AWS account and you may launch a few Lambda functions, and that represents your microservice. And you'll go in and do all the requisite network configuration to route from the primary account into the secondary account. So API access is granted, and then there's security configuration to do at the IAM level and at the network layer. But at the end of the day, you now have a microservice that's launched within a new account.

That doesn't sound too bad, but as you grow your microservices and create multiple accounts, what you end up with is a lot of what we call undifferentiated heavy lifting. Your builders are spending time on infrastructure rather than applications, and that's what we don't want. The goal of moving to microservices, of using AWS cloud-native managed services, is to reduce the cognitive load and the operational overhead of your builders as much as possible.

To that end, AWS Migration Hub Refactor Spaces allows you to start refactoring your applications in days instead of months. It is a purpose-built AWS service for implementing the strangler fig pattern with your existing applications. It reduces the time to set up your infrastructure to begin that refactoring. It allows you to shield your customers and the consumers of those APIs from the changes that are occurring while you're refactoring. While you're implementing strangler fig, you can then reroute traffic from the old to the new across AWS accounts, and you can do this in real time through the console with just a couple of clicks.

This solves setting up the infrastructure for refactoring and all of the different network and security configuration that I mentioned on the previous slide. It also allows you to operate those applications at scale while you're refactoring. It does this by orchestrating a few things. The first is the AWS Transit Gateway and the VPCs that are required for routing from the old to the new. It also configures an API Gateway with a VPC link so that you can point to private URL endpoints within those microservice accounts and incrementally route your traffic from old to new.

So let's take a look at implementing strangler fig with AWS Migration Hub Refactor Spaces. First step: create a Refactor Spaces environment. Now, you can create a brand new AWS account or use an existing AWS account; it's up to you. This is going to provision, optionally, an AWS Transit Gateway. I say optionally because if you already have one configured, perhaps set up to connect back to a data center through Direct Connect and a few other AWS accounts, you can use that existing one as well.

You're then going to share that environment with your other AWS accounts. So you'll have one account that houses your existing monolith application, which could be running on EC2 or potentially in containers. That's where your existing application is. You provision a new AWS account where your microservices are going to land, as I said, either in a serverless approach with Lambda or potentially in containers through ECS or EKS.

Sharing the environment uses AWS Resource Access Manager to share that environment with the other AWS accounts so that you can configure the routes. Once those microservice accounts are available, next you actually create the application within the AWS Migration Hub Refactor Spaces console. This provisions the Network Load Balancer, API Gateway and VPC link that allow connectivity through your AWS environment to the target AWS accounts for your monolith as well as the additional microservices that you're connecting.

Now, remember, all of these resources that you see popping up on the screen are being orchestrated by Refactor Spaces. So this is removing the burden of having to do all of this manually.

Next, you'll create the services, with URLs for containers or with AWS Lambda for a serverless approach. This will define the connectivity from the environment account through the Transit Gateway into the microservice account as well as the monolith account.

Next, you'll add routes. The first route that you're going to define is what we call the default route, and it's going to point to your existing monolith application. At this step, you now have an API endpoint that originates in the Refactor Spaces environment account, points to the monolithic application and behaves exactly the same as going directly to your monolith application account.

So from the end consumer, be it a customer or an API consumer if you're selling a third-party API, for example, it will look exactly the same. And that's the point here. Now we've set up all the infrastructure and the networking, so we're ready to actually start to refactor around the edges, peel out those microservices, create them in their own separate accounts and then reroute. And that's the next step.

So we'll deploy the new microservice. In this example that I have here, this is containers running on Fargate. We created an Application Load Balancer, or ALB, and we can fully test this microservice in isolation, without impacting the customer at all, until we're comfortable and confident in that deployed microservice. And again, because this is small, the hope is that this would be done through automated testing of those well-defined APIs that became the contract.

Once you have that confidence that this is ready to go, you then configure Refactor Spaces to add a route for that API endpoint to the microservice. Nothing has changed from the consumer side, from the customer side; all the paths remain exactly the same from the external viewpoint. Internally, all of the traffic for that specified API endpoint or path is now going to route to the new microservice that exists in its own AWS account.
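Conceptually, the rerouting that Refactor Spaces manages for you can be sketched like this (all names are hypothetical): a default route sends everything to the monolith, and specific path routes peel traffic off to new microservices as they come online.

```python
# Hypothetical sketch of strangler-fig rerouting: a default route to the
# monolith plus path routes that take precedence as services are carved out.
class StranglerRouter:
    def __init__(self, monolith_url: str):
        self.default_route = monolith_url  # everything starts at the monolith
        self.routes = {}                   # path prefix -> microservice URL

    def add_route(self, path_prefix: str, service_url: str):
        self.routes[path_prefix] = service_url

    def resolve(self, path: str) -> str:
        # Longest matching prefix wins; otherwise fall through to default.
        matches = [p for p in self.routes if path.startswith(p)]
        if matches:
            return self.routes[max(matches, key=len)]
        return self.default_route


router = StranglerRouter("http://monolith.internal")
print(router.resolve("/cart"))        # http://monolith.internal
router.add_route("/cart", "http://cart-svc.internal")
print(router.resolve("/cart/items"))  # http://cart-svc.internal
print(router.resolve("/catalog"))     # http://monolith.internal
```

The key property is that adding a route is the only change: consumers keep calling the same paths, and traffic for an endpoint flips from old to new the moment the route is registered.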

Now for some new features that I want to announce that have shipped since last re:Invent. First, we now have a post-launch integration with AWS Application Migration Service, or MGN. So if you're migrating from a data center into AWS using MGN, you can now configure a post-launch configuration that will automatically set up the AWS Migration Hub Refactor Spaces environment and create the default route to the monolith using the server that was just migrated.

I mentioned that you can now use your existing network bridge, which allows a lot more flexibility in using Refactor Spaces. Refactor Spaces can now also use Lambda aliases as the endpoints, giving you much tighter control of the versioning and environment settings of that application through aliases.

Automated DNS updates: in the past, you'd have to have a kind of sidecar Lambda function that would update IP addresses as they changed. This now happens automatically through DNS updates based on the TTL of the DNS settings. And expanded region availability: Refactor Spaces is today available in 17 different regions.
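The TTL-driven behavior can be sketched as follows. This is purely illustrative (the names are made up, and `resolve_fn` stands in for an actual DNS lookup): the cached answer is reused until its TTL expires, then re-resolved automatically, which is what removes the need for a sidecar updater.

```python
import time

# Hypothetical sketch of TTL-based endpoint refresh: re-resolve the
# endpoint only once the cached DNS answer's TTL has expired.
class TtlCachedEndpoint:
    def __init__(self, resolve_fn, ttl_seconds: float, clock=time.monotonic):
        self.resolve_fn = resolve_fn  # stand-in for a DNS lookup
        self.ttl = ttl_seconds
        self.clock = clock
        self._cached = None
        self._expires_at = 0.0

    def address(self) -> str:
        now = self.clock()
        if self._cached is None or now >= self._expires_at:
            self._cached = self.resolve_fn()   # fresh lookup
            self._expires_at = now + self.ttl
        return self._cached


# Demo with a fake clock so the TTL expiry is deterministic:
fake_now = [0.0]
answers = iter(["10.0.0.1", "10.0.0.2"])
ep = TtlCachedEndpoint(lambda: next(answers), ttl_seconds=30,
                       clock=lambda: fake_now[0])
print(ep.address())   # 10.0.0.1 (first lookup)
fake_now[0] = 10
print(ep.address())   # 10.0.0.1 (TTL not expired, cached)
fake_now[0] = 40
print(ep.address())   # 10.0.0.2 (TTL expired, re-resolved)
```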

For the final one: you know, we like to say that we look around corners and invent on customers' behalf here at AWS, but 90% of what we build comes directly from what our customers tell us matters. This last one is a great example, and I bring it up because one of our early launch partners was Affinity, who you're going to hear from in just a minute. One of the early issues they had was a desire to route to these services based on path parameters, that is, utilizing path parameters within the API to control routing to the new microservice. Affinity had some workarounds in the beginning, but we talked to them and saw that this was a key issue, not only for them but for any customers that are going to be consuming this service. And so we now have API path parameter support.
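To illustrate what path-parameter routing means, here's a small, hypothetical sketch: a route template in API Gateway's `{param}` style is matched against a concrete request path, and the extracted parameters could then drive routing to the new microservice. The template and paths below are made up for the example.

```python
import re

# Hypothetical sketch of path-parameter matching: a template such as
# /users/{userId}/vault is compiled to a regex with named groups, and
# matching a concrete path yields the parameter values (or None).
def match_template(template: str, path: str):
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template)
    m = re.fullmatch(pattern, path)
    return m.groupdict() if m else None


print(match_template("/users/{userId}/vault", "/users/42/vault"))
# {'userId': '42'}
print(match_template("/users/{userId}/vault", "/users/42/login"))
# None
```

A router supporting path parameters can then decide per-template which backend receives the request, rather than being limited to literal path prefixes.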

So I'm really excited to welcome Adam to the stage to hear the Affinity story, because it's great to hear me talk, but it's even better to hear from customers that are getting good results. So, Adam, come on up. Thanks, RP. Hey, is my mic on? Yeah, we're good. Morning, everybody.

I'd like you to imagine for a moment that you browse to a website that you've never been to before. You've never been to this website, but it's created the most compelling customer experience, as if it was designed just for you, as if you are a market of one. You go to this website and you have a look around; it knows what your preferences are, it knows what you're looking for, it knows why you're there, but it does not know who you are. You are completely private; your privacy is completely preserved. And you look around this website and then you think, this is interesting, I think I might want to create an account.

So you find the sign-in button and you click it, and then, with your consent, data is exchanged and you're in. Notice I didn't say you find the registration button. Notice I didn't say anything about a password. You simply press sign in and your account is automatically created, and you didn't have to use a password. It was completely seamless.

Now imagine the user experience that you would be able to present to your customer if you were engaging in that kind of setup. If you're the developer of that website, what would you need in order to make that work? Well, firstly, from a sign-up and sign-in perspective, one single flow. You would need a standards-based implementation that would work well with other sign-in and sign-up mechanisms that you have in place, so you'd want it to be OIDC compliant.

You'd also not want any vendor lock-in. While you can implement passwordless authentication using emerging standards such as FIDO2, you can end up locked into an ecosystem, so you'd want portability. And you'd also want an easy way to transfer data programmatically, with your consumer's absolute consent at every step of the way. You would want all of that in place to create a compelling customer experience while keeping that customer's data as private as possible.

So you'd need the relevant SDKs and documentation and support. This is what we're building at Affinity. I'm Adam Latter, the CTO of Affinity. Affinity has been around for about three years. We're a startup venture based out of Singapore with a global presence, and we are changing data ownership for good. We are using decentralized technologies such as verifiable credentials, verifiable presentations, decentralized web nodes, zero-knowledge proofs and more to create compelling user experiences that preserve the privacy of your customers at all times.

We've been operating for about three years using these decentralized technologies and solving real-world problems. For example, during the pandemic, Affinity technology was used by millions of cross-border travelers to verify their vaccination status. So, real-world solutions using decentralized tech. We are 100% on AWS and have been since day one, and AWS gives us the flexibility to abstract away the infrastructure, to operate quickly, to fail fast, just like a good startup should.

And we chose AWS specifically because the engineering excellence that our customers demand from us is what we get from AWS. So what's the problem we're solving today? Well, as I stand here in front of you on stage, my physical self can be described by hundreds of thousands, if not more, of data points, attributes, traits and preferences.

Attributes like my location and my height; preferences like my favorite food and my favorite music; and traits like my eye color and my date of birth, things that don't really change. All of those data points allow you to understand me, to form a relationship with me and to get to know me really well. And these are the characteristics that form my identity.

OK, but when I transfer into the digital realm, when I project my identity into the online space and I visit these websites, there may be a couple of form fields that I have to fill out in order to set up an account. Maybe there are five or six or eight or ten, not hundreds of thousands. So how could that website or application ever really know my identity, and who I am, and what I want?

So I'm leaving a trail, these slivers of information about myself, about my identity, as I visit all these websites, and therefore I can never have a compelling user experience. And what's more, what happens to that data that I leave behind as I'm going through all these websites and creating these accounts? What happens to that data?

Well, the reason why this is the case is that the internet does not have an identity layer. Identity was not built into the internet when it was created. Sure, there is a security layer, and we've iterated on that as engineers over the years. But if you think about it, when you are using a web browser and you're connecting to a web server, it's actually not you as a person who's connecting; it's actually your machine. To be specific, it's the user agent in your browser that's connecting to a server. And so there is no actual concept of identity being transferred there at an intrinsic layer.

And so our industry over the years has come up with a few solutions to this. First of all, centralized username and password. We all know what happens with that: humans unfortunately like to write down passwords and reuse them across multiple sites, and then when there's a breach in one website, it affects many. We've seen many cases of this over the years.

So we've iterated, and we moved to federated identity management. I can now use a trusted third party to vend scoped temporary credentials to allow access to websites on my behalf. But the problem with this: imagine if I'm using a social media provider to sign in to the websites that I visit. It's very convenient, but I've really just exchanged my privacy for that convenience, because that social media provider now knows what I'm doing and where I'm going, and can track me across a lot of the other websites I browse to. And those websites have only got this slice of information about me, and they want to learn more about me, so they'll use surveillance tactics to try to guess what I'm doing, or to influence my behavior to keep me stuck on their website. But again, they don't really know who I am.

And so social media providers may add additional information, but it's still out of my control, and it's not really my full self. Today is the day of decentralized technology. We're moving away from the Web 2.0, centralized way of doing things and into the decentralized world. And this is the area that Affinity is focused on: decentralized identity.

There are quite a few pieces here, and I don't have a lot of time, so I can't go through everything. But as you can imagine, it's all about identity: storing your data and having control of your data as a consumer, fully at the edge, in a way that allows you to extract value from the data.

There are three areas I'd like to focus on very quickly. On the right-hand side, the Affinity Vault: that's your personal data store at the edge, where all of your data is stored, and you can, with your full consent and control, in a highly secure and privacy-preserving way, share that data with third parties. Affinity Login: that's the experience I mentioned before, passwordless, seamless, one-click sign-in and sign-up, in a way that is very new to how the internet works. And it's standards compliant, OIDC compatible, so you can run it alongside your existing authentication and authorization processes. And Affinity Elements.

This is our internet-scale API control plane that allows developers to build compelling digital identity and privacy-preserving applications on top of our technology stack. And this is where I'm going to spend the rest of the discussion today, because this is the intersection between Affinity and the Refactor Spaces journey.

As you can imagine, there's a lot there. We've been working on this for the last year, but we've been in business for a couple of years, and we've been, as I said, failing fast and experimenting and doing lots of things, as a good startup should. But we realized that if we were going to build this new world of holistic identity and implement the Affinity Trust Network, we had a few challenges that we needed to solve.

So first of all, we had multiple API endpoints. Over the time that we'd been experimenting, we had different teams working on different approaches and different ways of authenticating and authorizing our customers. And so we ended up with many different ways of authorizing and many different APIs that developers would need to connect to, and that really just creates cognitive load for our customers, which wasn't great. Internally, our developer efficiency was also not great. There was a lot of cognitive load, and you heard RP talk about that before; cognitive load on developers means less productivity. It takes a long time to get started, there's a lot of load on the developers, and so starting a new project was actually measured in weeks. That's not very startupy; that's not very agile. So we knew that we needed to do something about that.

We had no standards across the organization on how we would define and deploy APIs, and so that meant, again, cognitive load and some friction points. And of course, with all the experimentation that we'd done over time, we had a fairly wide AWS estate sprawl, and we knew that we needed to get that under control because it was burning cost that we actually didn't need to spend. So there was a little bit of work we needed to do to get the house in order to build the Affinity Trust Network.

So we set some goals. Firstly, we had to increase the productivity of our developers. We had to make sure that we were reducing the cognitive load on our developers and making it very easy for them to build very, very fast, and that meant introducing standards across the organization.

We had already used many AWS accounts in our infrastructure build. And rather than consolidating that all down to only a handful of accounts, we decided that we wanted to actually double down and create many, many more accounts. That may seem counterintuitive when you're thinking, well, we want to reduce the cognitive load on our developers and be able to work quickly, but actually it makes a lot of sense. As you heard RP say before, we wanted to make sure that the scope of impact of any issues, or account limits and things like that, was compartmentalized into our various services rather than having one big bucket where all of our AWS resources were stored.

We also wanted to create a single front door, a single API, that our developers — our customers — could connect to: to reduce their cognitive load, simplify the way they consume Affinity services, and standardize authentication and authorization across the organization and across every API. So there was a lot of work to do there.

We were already following a microservices design pattern, and we followed it very well — so we weren't monolithic, but we did have some cleanup to do. An interesting side note here: even though we were already in a microservices environment, we still needed to introduce some kind of strangling mechanism to bring things under control. We also decided to move to serverless. We were running always-on infrastructure using Kubernetes, and we decided we needed to move to serverless to make sure that the calling patterns matched the infrastructure costs we were recognizing.

So with those design tenets, we created a platform that we called Genesis — Genesis for obvious reasons. It was a new start, the start of the new world of Affinity, which seems strange given that we had only been in business for three years, but we decided to call it Genesis. We also don't refer to the previous stack as legacy; we like to call it heritage. So we had a mission to move our heritage stack across to the new Genesis platform. And we set an audacious goal: we wanted our developers to be able to get up and running with a brand new Affinity service within five minutes.

Now, five minutes may seem a little bit crazy, but we wanted to reduce the cognitive load on our developers, be really agile and deliver quickly, because the business was demanding that from us. We also wanted to be tightly integrated with IAM security. Even though we were consolidating our authentication and authorization down to one flow, we wanted to make sure we were taking advantage of the underlying security fabric of AWS and having security at all of the layers.

And we knew that the way we were going to do all of this in a short amount of time, while we were building the Affinity Trust Network, was to introduce a lot of automation and codegen. We had to build tooling. As I mentioned before, for account isolation we doubled down on using many AWS accounts, and RP gave you some reasons earlier about why that's good. We also leveraged AWS Organizations heavily to simplify management of all that. And we took an API-first approach.

Quite often developers will write the code and then try to express that as an API. We took the opposite approach: we wanted to write the API using the OpenAPI spec and then express that as code. So the tooling we created takes an OpenAPI spec YAML file and emits code ready for the developer to start working. An API-first approach.
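As a rough illustration of that spec-to-code direction — this is a hypothetical sketch, not Affinity's actual Genesis tooling, and the spec and function names are made up — a codegen step can flatten the OpenAPI paths into a route table and emit handler stubs for the developer to fill in:

```python
# Hypothetical API-first codegen sketch: the spec is inlined as a dict,
# as if already parsed from the checked-in YAML file.
SPEC = {
    "openapi": "3.0.3",
    "info": {"title": "orders", "version": "1.0.0"},
    "paths": {
        "/orders": {
            "get": {"operationId": "listOrders"},
            "post": {"operationId": "createOrder"},
        },
        "/orders/{orderId}": {
            "get": {"operationId": "getOrder"},
        },
    },
}

def routes(spec):
    """Flatten an OpenAPI spec into (METHOD, path, operationId) tuples."""
    out = []
    for path, ops in spec["paths"].items():
        for method, op in ops.items():
            out.append((method.upper(), path, op["operationId"]))
    return sorted(out)

def handler_stub(operation_id):
    """Emit a Lambda-style handler stub for one operation."""
    return (
        f"def {operation_id}(event, context):\n"
        f'    raise NotImplementedError("{operation_id}")\n'
    )

for method, path, op_id in routes(SPEC):
    print(method, path, "->", op_id)
```

The point of the direction reversal is that the spec stays the single source of truth: the route table, the stubs, and (in Affinity's case) the infrastructure all derive from it rather than the other way around.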

So why did we choose Refactor Spaces? Again, we were already in a microservices environment, not monolithic, but we still needed to make changes to our environment while we were in business — basically rebuilding the plane while we were flying it. And rather than seeing Refactor Spaces as something we would use just to make the move across to the new environment, we actually saw it as an ongoing delivery partner.

And the reason is that it has done a fantastic job of abstracting away, from our tooling and from our developers, the underlying complexity of mapping API endpoints across AWS accounts and into Lambda functions or Fargate containers. It makes all of that just go away. It gives us full control over the paths we're setting up in our API endpoint, and we don't have to worry about which AWS account or which compute target we're targeting.

So it makes all of that complexity go away and has reduced the cognitive load on our development team. Our tooling that sits on top of Refactor Spaces handles the rest: the developers just focus on the OpenAPI spec, run the tooling, and everything just works for them.

Adam: And also importantly, we're a startup, but we know we're going to get things wrong. Every time we make a decision, we're introducing tech debt. So we wanted to make sure we didn't make a wrong decision that we couldn't actually unwind.

So if we make a decision to move some workloads across to Lambda, in the future we may actually need to move those to always-on infrastructure because of the calling patterns, and Refactor Spaces gives us the flexibility to make that decision at any time later on. We're not locked into any decision; we can easily just change the mapping from the API to the compute layer.
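That decoupling can be pictured as a simple route table: callers bind to the path, and the compute target behind it can change freely. The target labels below are illustrative placeholders, not real ARNs or Refactor Spaces identifiers:

```python
# Toy picture of route/compute decoupling: swapping Lambda for an
# always-on container behind a path is a mapping change, not an API
# change. Target strings are made up for illustration.
routes = {
    "/orders":  "lambda:orders-fn",
    "/reports": "lambda:reports-fn",
}

# /reports turns out to be invoked constantly; always-on compute is a
# better fit for that calling pattern, so remap it without touching the
# public API that developers bind to.
routes["/reports"] = "fargate:reports-svc"

print(routes["/reports"])
```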

And so that gives us ultimate flexibility. Here's a quick look at the architecture we have. On the left-hand side, our engineers start with a new OpenAPI spec, or a modification to an existing spec, and check that code into our pipeline. Everything is fully automated: our developers are just responsible for checking the code into the repo, and then we run various processes and checks across that OpenAPI spec YAML file.

And obviously across the supporting code as well: linting, anti-tamper, security checks, those kinds of things, all done automatically. At that point, because we're using the CDK to express our infrastructure, we'll be deploying Lambda functions and making changes to Lambda functions and the like, but also making changes to routes in Refactor Spaces.

Those changes are then expressed in API Gateway, which is where our external developers make the connection and actually consume these services. You'll notice on the right-hand side at the bottom there, the merged API spec. As RP mentioned before, when we first started looking at Refactor Spaces, we realized that path parameters were an important aspect we needed to include.

And because Refactor Spaces didn't natively support that at the time, we built our own mechanism to keep state on the as-is spec, convert from the as-is state to the to-be state of the API spec, and modify the calls directly into API Gateway to express path parameters and other things that Refactor Spaces wasn't able to handle for us.
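The core of an as-is/to-be reconcile can be sketched as a set diff over route paths. This is a deliberately simplified stand-in for the idea, not the open-sourced Genesis Reconciler itself:

```python
# Illustrative reconcile step: diff the "as-is" routes (currently
# deployed) against the "to-be" routes (derived from the merged OpenAPI
# spec) to plan what to create, delete, or keep.

def reconcile(as_is: set, to_be: set) -> dict:
    """Return the route changes needed to move as-is toward to-be."""
    return {
        "create": to_be - as_is,   # new paths to register
        "delete": as_is - to_be,   # stale paths to remove
        "keep":   as_is & to_be,   # unchanged paths
    }

as_is = {"/orders", "/orders/{orderId}", "/legacy/ping"}
to_be = {"/orders", "/orders/{orderId}", "/orders/{orderId}/items"}

plan = reconcile(as_is, to_be)
print(plan)
```

In the real system the "apply" half of the loop would then issue the API Gateway and Refactor Spaces calls; here only the planning half is shown.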

But of course we were also working with the Refactor Spaces team, giving them our feedback on the things we would need in order to take this to production, and they've done a great job of providing many other features as well, including that one. We've actually open sourced that tool — it's called the Genesis Reconciler, and it's on GitHub if you want to check it out.

And it just shows you the kind of flexibility you have when using AWS, because you can go under the covers and get access to the underlying resources. Even though Refactor Spaces is a higher-level abstraction or orchestration layer, we're not locked into just that layer, unable to reach the resources underneath.

So let's talk about some results. We've seen an overall reduction in the cognitive load on our developers of 50%. That of course reduces the effort they have to go through to create or modify an existing API, and therefore increases our agility. 50% is significant.

And we've standardized all of our tooling across the organization, so our developers have much less cognitive load when starting a brand new project — it's all very standardized. Our lead time for a new service used to be one or more weeks just to get started from an engineering perspective, let alone all the other processes that go around that. We're now down to 10 minutes.

Even just saying that out loud seems a bit crazy, but it really is that quick. By standardizing and automating as much as we can, we've got that down to a significantly low level. The time for making a change to an API is measured in minutes: the developer simply expresses the outcome as code and checks it in, the pipeline runs, and everything else is done for us.

As I mentioned, one of our goals was to be able to have a brand new Affinity service up and running in five minutes. This is really important. Five minutes is a pretty audacious goal — why was that important for us to hit?

Well, if you think about it — and I'm sure there are many developers in the room here — if the business comes to you and says "I urgently need this feature, it's got to be done by this time because it's business critical," you as an engineer are thinking to yourself: you know what, I could just take this intent and put it into an existing API, because it'll just be easier, and I know there's business criticality here.

I need to ship it, so I can't take a long time to implement this; I'm just going to put this feature in that API. What you've actually done there is blur the domain boundaries. Over time your domain boundaries atrophy, because you end up with features in an API that don't really belong in that particular domain.

And over time, that just introduces tech debt, confusion and cognitive load. Whereas if you can say to a developer: if it makes sense for this to be a brand new API, it takes you only five minutes to get hello world up and running — within five minutes of running the tool, you're ready to start coding. Now, sure, that doesn't mean it's deployed in production; of course that takes a little bit longer.

But if you as a developer can start coding the outcome within five minutes, there's actually no excuse not to do it the proper way. So we're actually investing in better APIs and stronger API — sorry, domain — boundaries for the future. The time to get the deployment out into the first environment, which is usually our dev environment, is around 10 minutes.

And again, that's fully automated, so the developer doesn't have to do anything: run the tool, check the code in, and off it goes. Overall, we've seen a 25% reduction in our infrastructure costs. Now, that's actually a little bit conservative — it's way more than that. The reason we say 25% is because we can definitely say that Refactor Spaces contributed to that 25%.

From a broader perspective, it's actually allowed us to reduce cost across the board, because we're now thinking in a different way and we're able to clean things up much faster. So the overall reduction in spend, and in the infrastructure we had to spin up, has been significant.

So we now have 30-plus services running through Refactor Spaces. We see a world in the future where, with everything we need to build for the Affinity Trust Network, we'll probably have 300 services with any luck. So we've built this to scale. We have 30 services on Refactor Spaces today; that's just the starting point.

Our deployment frequency wasn't quite as bad as once a quarter, as RP mentioned before — luckily — but it wasn't great. We are now at 10 deploys or more per day. It's common for us to deploy; it's no longer a special case, we just deploy, which is fantastic. And that puts us in the realm of elite performers according to the DORA metrics, which is also something to celebrate.

And of course, our biggest outcome was that instead of having 20 APIs with different ways to authenticate and authorize, we're now down to one single front-door API. Our developer customers just need to bind to one API, and that API is the front door for everything in Affinity.

And we have a standard authorization and authentication layer, so our security teams know that all the calls coming in through that front door are going through the same endorsed flow. Overall, our outcomes are much better developer productivity and a reduction in the cognitive load on our development team.

Our deployment is fully automated, with the right belts and suspenders — the guardrails — to prevent developers from doing something they shouldn't and exposing the organization to risk. A lot of that undifferentiated heavy lifting has been taken away, and we are much more agile without impacting our customers.

When we make a change, all of this can happen multiple times a day without impacting the developers who are binding to our APIs. At Affinity, we're building the next version of the internet. We're changing data ownership for good, and our vision is a world where everyone can securely manage and control access to their data in a privacy-preserving way, and extract value from their data in a way that you just can't do on the internet today.

And I'm really excited to be working with AWS and Refactor Spaces to bring that vision to reality. I'll hand back to you now. Thanks, RP.

RP: Thank you very much. Thank you, Adam. One of the best things about working at AWS is working closely with customers like Affinity and hearing the results and the story of their success when they partner with us. So thank you for sharing those great results.

I want to leave you with a few resources, so QR codes up, cameras up. The first is the landing page for AWS Migration Hub Refactor Spaces, where there's great documentation and detail about the service. Next is a great write-up on multi-account strategy. We touched on it a little bit today and talked about some of the advantages; this destination goes into a lot greater detail about exactly what a multi-account strategy gives you, as well as the tools to automate a lot of it. Then there's the Affinity site, where you can find out more about the products and services they're producing.

Next, we have an event storming workshop on GitHub. You heard us talk about domain-driven design, and about microservices that have a clearly defined business domain boundary. That's an exercise you do with the business folks, not just the builders. Event storming is a mechanism — actually a fun event — that you run with the business to define a view of your application as seen from the business and your customers.

And finally, some details on the strangler fig pattern, along with architecture diagrams and sample code showing implementations of the pattern. That's all we have for today. I do want to ask you all to open up the mobile app and fill out the survey. A little re:Invent tip: they often give away additional swag depending on the surveys you complete.

So you're in an early session here — open the app up, get that survey completed, and be ready for that additional swag. It's also the only way they'll actually pay me, so if you can fill that out, I'd really appreciate it.

Thanks again for your time. Please enjoy the rest of re:Invent. I'm RP, that's Adam — take care.
