Hey everybody, and welcome to NET326. This is Amazon VPC Lattice: Architecture Patterns and Best Practices. My name is Justin Davies and I'm a product manager here at Amazon, focusing on all things application layer networking, in the EC2 networking organization.
So let me get to know the audience a little here before we get started. I like to do this every time because, as you'll see as we go through this presentation, it has a lot to do with roles and meeting customers where they're at.
So maybe just with a show of hands: who in this room considers themselves more of an admin? This could be security admins, network admins, cloud admins. OK, so a good portion of you. All right.
For the people watching the video later: a good portion. What about developers? This doesn't necessarily have to be the person writing the code, but someone owning a service, perhaps configuring that service. I would say it's probably about 50/50. So that is very good to see.
What about both? Who is kind of wearing all the hats? That's good, that's good too. So, a good portion of you. All right. Well, this is exactly why we wanted to build VPC Lattice, because while a smaller company, or even a larger company, might have people that wear multiple hats, there are a lot of people who cover either the admin or the developer role.
Typically, what we find is that you usually fall into one or the other, and sometimes there are some conflicting priorities that can get in the way. So you hear things like "I want to empower my developers" — that's what the admin says — "but safely," right? A lot of the time I've heard the term "the fun police"; that's what a lot of developers call the admins.
But it's not true, right? They just don't want to end up on the news. Their whole charter is to make sure that you can run, operate, and move quickly, but not end up on Hacker News. Now, the developers — even if you are a full stack engineer, you may not be the CCIE or the network expert. In all reality, networking is probably their least favorite thing to do. They'd rather not be running traceroute to try to troubleshoot their problems. So how do we make this easier for folks?
Not just with configuration, but also with troubleshooting. That's where a lot of this came from, after talking to customers and seeing that this persona problem exists.
So earlier this year — actually, last year we announced it as a preview, but earlier this year we took a new product called Amazon VPC Lattice to GA, back in March. The whole purpose, if you don't know — and we'll spend a little bit of time going over it — is to simplify application layer networking and to bridge the gap between admins and developers. How do we make this a little bit easier? How do we make them become friends again? By the end of this, I want all of you talking with each other. All right.
So with VPC Lattice, the whole mission was to let developers build their applications faster, and to simplify networking and connectivity so that they weren't dealing with the underlying network plumbing. We wanted VPCs to not really be a thing, right? We wanted applications to be able to live wherever made sense for them — different accounts, different VPCs — and the developer really shouldn't have to think about any of that.

From a security perspective, there's a big push to get to zero trust architecture patterns and to adopt application layer security, not just network layer security. We wanted to make it really easy for customers to get the strongest security possible, with application layer authentication and authorization. And then, from a visibility perspective, to be able to see whether that security posture is actually doing what you think it's doing.
As well as: if your developer calls you at two in the morning and says "it's a network problem," you have the same visibility that they do, to actually determine what that IP address is. Because in all reality — what is an IP address? It's not just an IP address; there's probably a service behind it. And so it gives you that kind of bridge to understand that.
OK, so what are we going to talk about today? We've got an overview of VPC Lattice, for those of you living under a rock. We'll talk about what it is and some of the key core components.
Then we'll also talk about some new features and capabilities that we've launched since GA. And then I'm also going to cover some of the common questions and answers. We launched in March, and it's been really interesting talking to customers; a lot of the time there are common questions, and I'd like to address some of those today, since you probably all have the same kinds of questions.
We'll also talk about some top architecture patterns and some popular things for you to think about, to help address the problems you're probably seeing with your customers today, or your own applications today. And then we'll follow with how to get started, how to contribute, and how to stay in touch with us. All right.
So VPC Lattice has four key components that we're introducing — and this is a VPC feature: services, service networks, auth policies, and a service directory. These are four new things that we've introduced.
You don't have to use them, by the way; you can enable this if you want to. And we're going to cover each and every component.
OK, so first and foremost, it's really important to know what a service is, right? Because everybody's got their own definition. There are Kubernetes services out there, there are Lambda functions — what is a service? So let me talk about this for two seconds to make sure we're all on the same page.
In VPC Lattice, a service is not a physical thing; it's just a logical abstraction. You could think of it like a DNS endpoint — every service is mapped to a DNS endpoint. In this example, I've got app1.example.com. That is the service. A service in VPC Lattice is just a logical front end, and it consists of listeners, rules, and target groups.
Now, if you've been around Amazon for a while, you probably know this is very similar to the Elastic Load Balancing family: listeners, rules, and targets. The listener is something like a protocol: is it HTTP, is it HTTPS, what's the protocol version, is it gRPC — things like that, HTTP/1, HTTP/2. That's the listener. The rules can be something super simple, like "default forward to this target group" — that's just the catch-all for everything on 443 or everything on port 80 — or they can be something way more advanced. They don't have to be, but you can make them more advanced.
And so I've copied a couple of examples into this slide. The first one is showing: maybe I want to send a path like /path1 for a certain API to target group 1, which is being fed by an EC2 Auto Scaling group.
But if I get a header that says user-agent equals justin, I want to send that over to my target group 2 that has my Kubernetes services behind it. If I get a GET request, send that to Fargate. And maybe a catch-all, the default action if it didn't match any of those: send it to a Lambda function, right?
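To make that listener/rules/target-group model concrete, here's a minimal sketch of how a rule chain like the one on the slide gets evaluated in priority order, with a default action as the catch-all. The rule shapes and target group names here are my own illustration, not the actual Lattice API.

```python
# Minimal sketch of listener-rule evaluation, first match wins.
# Rule shapes and target-group names are illustrative, not the Lattice API.

def route(request, rules, default_target):
    """Return the target group for the first rule whose conditions all match."""
    for rule in rules:
        if "path_prefix" in rule and not request["path"].startswith(rule["path_prefix"]):
            continue
        if "header" in rule:
            name, value = rule["header"]
            if request.get("headers", {}).get(name) != value:
                continue
        if "method" in rule and request["method"] != rule["method"]:
            continue
        return rule["target_group"]
    return default_target  # catch-all, e.g. a Lambda function

# The examples from the slide, expressed as data.
rules = [
    {"path_prefix": "/path1", "target_group": "tg1-ec2-asg"},
    {"header": ("user-agent", "justin"), "target_group": "tg2-kubernetes"},
    {"method": "GET", "target_group": "tg3-fargate"},
]

print(route({"path": "/path1", "method": "POST"}, rules, "tg-lambda"))   # tg1-ec2-asg
print(route({"path": "/x", "method": "POST",
             "headers": {"user-agent": "justin"}}, rules, "tg-lambda"))  # tg2-kubernetes
print(route({"path": "/x", "method": "DELETE"}, rules, "tg-lambda"))     # tg-lambda
```

The point of modeling it this way is that the client only ever sees the service's DNS name; which compute platform answers is purely a rule decision.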
And so it's really powerful, because you can now manipulate these things any way you see fit, and mix and match the compute platforms for whatever your developers want to use. Something I come across all the time: I'll talk to a developer, and they'll say, "I really want to use Lambda, but my security team hasn't wrapped their head around how to secure it appropriately, so I'm being forced to do X, Y, and Z." And the admins — I come from an admin background, so I get it: if I haven't approved it, I don't want you using it either. But this gives a bit more control, where I can have this abstraction in front and enforce the same security controls regardless of what's behind it. It's super powerful for letting developers move fast and choose the platform that makes sense — the same philosophy as letting your developers use whatever coding language they want.
Here's another one. We also support the ability to weight targets. Target groups can exist across VPCs, and you can weight them — maybe for /path1, send 90% of the traffic to target group 1 and 10% of the traffic to a Lambda function. I'm not showing it in this diagram, but you can have the targets in different VPCs, which is really cool.
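Weighted routing is easy to picture as weighted random selection per request. This is an illustrative sketch of the 90/10 split described above, not the Lattice implementation; the target group names are made up.

```python
import random

# Illustrative 90/10 weighted split between two target groups,
# like the /path1 example above. Names are made up for the sketch.
WEIGHTED_TARGETS = [("target-group-1", 90), ("lambda-function", 10)]

def pick_target(weighted_targets, rng=random):
    """Pick one target group, with probability proportional to its weight."""
    groups = [name for name, _ in weighted_targets]
    weights = [w for _, w in weighted_targets]
    return rng.choices(groups, weights=weights, k=1)[0]

# Over many requests, roughly 90% land on target-group-1.
rng = random.Random(42)  # seeded so the run is repeatable
picks = [pick_target(WEIGHTED_TARGETS, rng) for _ in range(10_000)]
share = picks.count("target-group-1") / len(picks)
print(f"target-group-1 share: {share:.1%}")
```

Shifting the weights over time (90/10, then 50/50, then 0/100) is exactly the canary/failover pattern the Kubernetes-upgrade story below relies on, and the client never sees the change.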
For some customers, what they've said to us is, "Hey, this actually helps us with Kubernetes upgrades," right? Because now I can lifecycle my VPCs: keep my old one up and running, spin up a new VPC, get the new cluster and my services going. It's like a staging environment — fail over 10%, and once I've qualified it and I feel comfortable with it, I can delete the old one. And what does this do? Because I have an abstraction in front of it, a.k.a. the service, I don't need to do anything to my client. Wherever that client is — it could be another service, it could be a user, I don't care — they don't even know you changed something. And so it's a very, very powerful thing to be able to separate those two things. It's a good abstraction layer.
Service network. OK, so this is another totally made-up thing; it's a logical abstraction. If you're familiar with the concept — and a lot of you probably are — Active Directory and different identity systems have this concept of users and groups.
You can do a whole bunch of things with individual users, like put permissions on them. But if you have a whole bunch of users, it makes sense to make a group and put all the common policies that affect all those users there, and then put the really specific policies on the individual user.
It's the same thing here, OK? A service network is really just a grouping mechanism that allows you to do a couple of things, and we're going to give some examples of what those things are — but really, think of it as a grouping mechanism. You can put services inside it, and you can associate that service network with VPCs. That association is the thing that enables connectivity, right? Just because a service is in a service network doesn't mean those things can talk to each other. It's a provider and consumer model. So you put the services in it, you make this thing everything you want it to be, and then you associate it with the VPCs. OK?
Auth policies. This is what I was talking about with the users and groups story. You can put auth policies — which are basically IAM resource policies — on service networks, and you can put them on services. And here's the big kicker. With an IAM resource policy, you're probably familiar with it: you could put it on a bucket, and the resource is usually an AWS thing, right? Who's allowed to do something to this resource.
Here's the difference: that resource can now be your service — your VPC Lattice service. So it's a very, very powerful way to use IAM authentication. It gives you all the benefits — hands-off secrets management, the ability to attach instance profiles and task roles, hands-off credential management for your own service-to-service authentication — and you can write super context-rich authorization rules about what can talk to what. And these can be applied on the service network or the service.
So what does this look like if you can apply them in two places? Is that super confusing? No — this is basically a least-privilege model. If I write a policy that essentially says "allow traffic" on the service network, and then I write another policy on the service that says "allow" — you guessed it, you're allowed. But if I did something a little bit different and said "allow traffic" at the service network but "don't allow traffic" on the service — well, you guessed it, it's not allowed. So it gives you this kind of flexibility to really control things.
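The two-layer behavior described above can be sketched as a tiny truth table: both layers have to allow the request. This is a simplification — real IAM evaluation also has explicit-deny semantics — but it captures the least-privilege point.

```python
# Simplified sketch of two-layer auth-policy evaluation: a request is
# allowed only if the service network allows it AND the service either
# allows it or has no policy attached. Real IAM evaluation is richer
# (explicit denies, conditions); this just shows the least-privilege idea.

def evaluate(network_allows, service_decision):
    """service_decision is None when no auth policy is attached to the service."""
    if not network_allows:
        return False                 # the guardrail always applies
    if service_decision is None:
        return True                  # only the guardrail is in place
    return service_decision          # otherwise the service policy decides

assert evaluate(True, True) is True      # allow + allow -> allowed
assert evaluate(True, False) is False    # allow + deny  -> not allowed
assert evaluate(False, True) is False    # network says no -> never allowed
assert evaluate(True, None) is True      # guardrail only
print("both layers must allow the request")
```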
Here's an example of a service network policy. Service network policies and service policies are the exact same thing — you can put fine-grained or coarse-grained policies on either side. You can choose not to even enable service policies if you don't want to use authentication and authorization, but I encourage it. And what I typically tell our customers is: don't put something super complex on your service network. This is your guardrail, this is your defense in depth, right? It should be human-readable, something you can quickly understand.
Like: I know that every single request that came in here was at least authenticated from my org ID. That's what the policy I'm showing here does — at least it came from me, that's what I'm basically saying, right? That's not where we want you to stop; it's not the end goal. But this is your guardrail, your protection, your defense in depth.
Now, on the service side — and it still might be an admin configuring the service policy; it's completely up to you and your organization. Some people give their developer or service owner more control of their own services, others separate that duty — whatever makes sense to you. But in this model, I say this is where you do the fine-grained stuff, and you can get really, really fine-grained.
In this example, I'm actually defining individual principals that can call, right? And this supports the typical IAM stuff, so you can do principal tags — if you wanted to group and say "only principals that were tagged prod or PCI," you could do stuff like that here. And then in the part where all the yellow text is, I'm basically saying: only for my service, which is the widget service, allow it if the query string parameter of the request equals color=blue and it's a GET request. So I'm talking about very, very context-specific, rich policy.
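Here's roughly what the two levels of policy look like side by side, reconstructed from the slides as Python dicts. The ARNs, org ID, and account numbers are placeholders I made up; the `vpc-lattice-svcs:*` condition keys follow the documented VPC Lattice auth-policy condition keys, but verify them against the current docs before copying.

```python
import json

# Coarse guardrail on the SERVICE NETWORK: every request must at least be
# authenticated from my AWS Organization. Org ID is a placeholder.
guardrail_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "vpc-lattice-svcs:Invoke",
        "Resource": "*",
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-example12345"}},
    }],
}

# Fine-grained policy on the widget SERVICE: a specific caller role, and only
# GET requests where the query string has color=blue. ARNs are placeholders.
fine_grained_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/widget-caller"},
        "Action": "vpc-lattice-svcs:Invoke",
        "Resource": "arn:aws:vpc-lattice:us-west-2:111122223333:service/svc-widget/*",
        "Condition": {
            "StringEquals": {
                "vpc-lattice-svcs:RequestQueryString/color": "blue",
                "vpc-lattice-svcs:RequestMethod": "GET",
            }
        },
    }],
}

print(json.dumps(guardrail_policy, indent=2))
```

Notice how the guardrail stays short enough to read at a glance, while all the request-level context lives at the service — exactly the split recommended above.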
And the reason I'm saying do that at the service level is that if you did it on the service network, it wouldn't be human-readable, right? It would be really, really tough. And so you want that kind of separation there, OK?
Service directory. This is really powerful, actually, even though it's the simplest feature we have. It's just a list of all of the services in your account — an account-level view of all the services you created and all the services that were shared with you. Admins love it because they can log into that account and see all the things they technically have access to, and developers love it because they can go in, copy the URL of the service they're trying to reach, and hit the ground running. So very, very powerful, even though it's the simplest thing — nothing super fancy, just go to the console. It's a great thing to have there.
All right. So this is how the roles make sense in my mind — you can change these if you want and give different IAM permissions, but this is typically what I see when I'm talking to customers. The admin is the person that creates the service network, defines the access controls with IAM policies, and is usually responsible not only for spinning up VPCs but for associating the service network with VPCs.
The developer, on the other hand, is the person that would traditionally be setting up the Application Load Balancer and everything behind it, right? That could be somebody with a different title in your organization. But they create the service, and they define traffic management — which could be as simple as "default forward to me and I'll figure it out," or the more advanced blue/green, canary-style deployment stuff.
And they may or may not be allowed to associate services with service networks. Some companies say, "Whoa, no way, I'm the only one that's going to be able to do that." So it's just up to you — you can figure that out — but this split seems to make sense for most of the customers I'm talking to.
All right. So let's walk through an example here with my vegetables and fruit salad company. We're going to walk through what it looks like to create a service, create service networks, and associate service networks with VPCs. This is mostly going to be an architecture talk — I'm not really going to give a demo or anything like that, I hope that's OK. I'm happy to do that later if anybody wants to grab me in the hallway.
Here we've got four VPCs — a bunch of VPCs all over the place. Some of these are shared services, some of them are services that can only talk to each other within their own application, and other ones are clients only. What I mean by that: when I talk to customers, a lot of the time they have applications that have dependencies — a.k.a. they're a service provider, and a bunch of applications call them — but they also have applications, like batch jobs, that nobody is ever going to call; they only call others, right? That's what I call clients. And some applications can be both. OK?
So let's walk through an example of where we see each of these in action. If I go ahead and create the services, you'll see that I created services for blueberry, kale, tomato, carrot, and cucumber. In VPC 3 I didn't really do anything, because those workloads are just consuming things — nobody needs to access them, so why would I even create a service for them? I don't need to; they can still participate in VPC Lattice without one. Then I've got cherry, apple, and pineapple. OK.
So these are the fruits and vegetables. To start out, I want to create a network where all my fruits can talk to each other, and I primarily just want to share blueberry, cherry, and tomato. So I create a service network, I call it "fruits," and I go ahead and share fruits — a.k.a. associate fruits with my VPCs, 4 and 3 — at this point.
As long as I don't have some crazy IAM policy denying things, and I don't have security groups or network ACLs blocking things, then raspberry and banana — the two batch jobs — and cherry, apple, and pineapple can all talk to each other, right? They can all access each other, automatically discover each other, and connect to each other. And there are no load balancers here, and no network connectivity to build — VPC Lattice is both of those, right? It's handling all of that functionality, and this just works.
So that's the three steps. But now, what do I do about vegetables? Vegetables also need to connect to things — kale is a shared service that carrot and cucumber need to access. But what do I do with tomato? Sometimes people tell me tomatoes are a fruit, sometimes people tell me tomatoes are a vegetable. So that's my shared service, right? It needs to be accessed by both groups, depending who you're talking to. (I think it's really a fruit.) This is basically to show you that you can put services in multiple service networks, right? It gives you the flexibility to define whatever the connectivity pattern is for that VPC, and Lattice will figure out the connectivity. OK.
So you can attach service networks to multiple VPCs, you can attach services to multiple service networks, and it gives you this hierarchical approach so you can manipulate it as you see fit. OK.
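The fruits/vegetables topology boils down to one membership rule: a VPC can reach a service only if some service network is associated with that VPC and also contains that service. Here's a toy model of that rule — the service names follow the talk, but the VPC numbering for the vegetables side is my own illustration, and the lookup logic is not the real data plane.

```python
# Toy model of service-network connectivity. Service names follow the talk;
# VPC numbers on the vegetables side are illustrative.
SERVICE_NETWORKS = {
    "fruits":     {"services": {"blueberry", "cherry", "tomato"},
                   "vpcs": {"vpc-3", "vpc-4"}},
    "vegetables": {"services": {"kale", "tomato"},   # tomato lives in BOTH networks
                   "vpcs": {"vpc-1", "vpc-2"}},
}

def can_reach(client_vpc, service):
    """A VPC reaches a service iff some network contains both of them."""
    return any(
        client_vpc in sn["vpcs"] and service in sn["services"]
        for sn in SERVICE_NETWORKS.values()
    )

assert can_reach("vpc-4", "cherry")          # fruit client reaches a fruit service
assert can_reach("vpc-1", "tomato")          # shared service, reachable from vegetables
assert can_reach("vpc-3", "tomato")          # ...and from fruits
assert not can_reach("vpc-1", "blueberry")   # fruits-only service stays out of reach
print("connectivity is defined entirely by service-network membership")
```

This is why the talk keeps saying "scope your network correctly first": before any auth policy runs, the membership rule alone already limits who can reach what.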
So here's an overall view of how to think about these things. By default, out of the box, your services, your service network — even your VPCs, if you really cut off all the boundaries — are designed to be secure by default. Services only have access to the services in their service network. This is really important: before even talking about security group filtering, network ACL filtering, network-level controls, or application-level authentication and authorization, you get very strong protection just by scoping your network correctly — just by scoping what that VPC has access to in the first place. That's a very important part of the design.
You can enforce encryption. You can say only HTTPS traffic is allowed — you're not allowed to do HTTP. That's a very strong one; it can be an easy button: check mark, at least I know everything is HTTPS and encrypted. And you can enforce authentication and authorization on every single request between your services, and on anything that leaves your VPC or comes into your VPC.
And again, I want to emphasize: start with simple policy, just by scoping your network appropriately — like we did with the fruits and vegetables. I needed something new, so instead of making my policy crazy, I just spun up a new service network. And services can be associated with many service networks, as we talked about.
All right. So now that we've got the baseline together — these are the core features and capabilities that VPC Lattice launched with.
We don't care if your services are across VPCs or accounts; we're integrated with AWS Resource Access Manager. You can share individual services, and you can also share service networks. So it gives you this flexibility: do I want to share things onesie-twosie and let my consumer put them in their own service network, or do I want to share a collection of services and just let them associate it with their VPCs?
The other part of the connectivity piece: Lattice is doing network address translation in this example, and I'm going to go through a packet flow in a couple of minutes. Everything talks to VPC Lattice on something called a link-local IP address range, and when you receive a packet, it came from Lattice, right? That's part of the protection. And with this process we also handle all of your network address translation. If literally everything had the same IP address, that's OK, because you're always talking to Lattice, and we do the translation as necessary. If one side is IPv4 and one side is IPv6 — say you're going through a transition where both sides can't run the same family at the same time — totally fine.
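A quick way to recognize that "everything talks to Lattice" pattern in your own logs is to check whether an address falls in the Lattice link-local range. The IPv4 range below is the one documented for VPC Lattice at the time of writing — verify it against the current docs before hard-coding it anywhere.

```python
import ipaddress

# Clients always talk to a link-local range owned by Lattice, and Lattice
# handles NAT toward the targets. 169.254.171.0/24 is the documented IPv4
# range at the time of writing; confirm against the current VPC Lattice docs.
LATTICE_V4 = ipaddress.ip_network("169.254.171.0/24")

def is_lattice_hop(ip):
    """True if this IPv4 address belongs to the Lattice link-local range."""
    addr = ipaddress.ip_address(ip)
    return addr.version == 4 and addr in LATTICE_V4

print(is_lattice_hop("169.254.171.25"))  # the kind of address DNS hands back
print(is_lattice_hop("10.0.1.17"))       # an ordinary VPC address
```

This is also why security groups matter in the packet flow later: if your security group blocks this link-local range, the first hop to Lattice never happens.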
The protocol versions we support are HTTP, HTTPS, and gRPC — so mostly HTTP services; this is an application layer proxy. For TCP today, you would combine this with something like PrivateLink, right? You do your PrivateLink or Transit Gateway stuff for TCP, and that still works alongside it. But this is really for application layer traffic today. Request routing is the typical stuff we talked about before — path, header, method, weighted target groups — things you would expect from an application layer proxy.
And then from a security perspective, you can use security group referencing on your VPC associations, which is really pretty powerful, because with security group referencing you can pick and choose which instances or resources in your VPC can talk to VPC Lattice in the first place. It's a very strong network layer control that isn't relying on IP addresses. And then, walking up the stack again, we have IAM authentication on the service and the service network, so you get the full defense-in-depth strategy. The target types we support right now are instances, Lambda functions, ALBs, and IP addresses — plus our Kubernetes integration.
We have a controller for Kubernetes that goes and provisions this all for you. It's called the AWS Gateway API Controller; it's open source, and it shows up in Lattice as IP targets — so just so you know, that's why I'm calling it out there. We also support Auto Scaling groups. And for ECS today, we still require an ALB or NLB in front of your ECS tasks. OK.
All right. So what's new? If you were around for our re:Invent talk last year, you're probably wondering what's happened since then. There are a couple of things we've really been trying to move quickly on. We've introduced a couple of new regions — I won't bore you with reading them — and we have plans for a lot more regions next year, so stay tuned if your region isn't there. We integrated with Resource Access Manager — that was there from GA — but we added a new feature called customer managed permissions. I'm not going to talk too much about that right now, because I have a slide that walks through it.
Shared VPC support: you can have services in a shared subnet, you can have clients in a shared subnet — there's no funkiness there, it works. That's a really popular one for customers where the admin team owns the VPC and wants to divvy out subnets to their consumers. VPC-level compliance — this is a big one. I mentioned that we're a VPC feature, and so all of the VPC-level compliance certifications that show up on our compliance web pages cover VPC Lattice.
The one caveat I would call out is that there are a couple of compliance certifications that are not yet covered for Lattice — things like FedRAMP. These are the ones that certify each API or SDK individually instead of the feature. So wherever you see individual SDKs listed, that's typically one that's probably not covered. But all the major ones you're typically used to are covered.
New event structure for Lambda and identity header propagation are basically the same feature — we'll go into that in just a second; one's for Lambda and the other is for all the other platforms, so we've got support for both. And the Gateway API Controller is now GA; that happened a couple of weeks ago.
All right. So: identity header propagation. This is probably one of my favorite features that we've launched — super powerful. It's also probably the most thumbs-up I've ever gotten on a LinkedIn post, when we launched this feature.
On every single request, we add a header that includes a whole bunch of metadata for your back ends, for your targets. That includes things like who the caller was, whether the request was authenticated, and, if it was authenticated, what the principal was. It includes things like what the source VPC was and what the account was — a ton of information. In the slide right after this, you'll see all the different fields. And it also works with IAM Roles Anywhere.
So this is what the header looks like — this is an actual snapshot from an HTTP server, and this is what's covered. There are some fields that come through in the standard identity header, and if you are using IAM Roles Anywhere, there are some really nifty things you can derive out of this.
We had some customers asking to be able to use their SPIFFE IDs in policy — to be able to write an authorization policy using a SPIFFE ID instead of just an IAM principal. So this gives you that: you can identify the workload with some of these fields, like the X.509 SAN name and URI; it'll give you that common name. The Lambda event structure is, again, pretty much the same feature.
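On the target side, consuming the identity header is just string parsing. The sketch below assumes a semicolon-separated key=value wire format and made-up field values — check the actual header name and layout in the VPC Lattice docs before relying on this shape.

```python
# Sketch of pulling fields out of the Lattice identity header on the target
# side. The semicolon-separated key=value format and the sample values are
# assumptions for illustration; verify the real layout against the docs.

SAMPLE = (
    "Type=AWS_IAM;"
    "Principal=arn:aws:sts::111122223333:assumed-role/inventory-task/abc;"
    "SourceVpcArn=arn:aws:ec2:us-west-2:111122223333:vpc/vpc-0123456789"
)

def parse_identity_header(value):
    """Split 'k=v;k=v;...' into a dict, tolerating '=' inside values."""
    fields = {}
    for part in value.split(";"):
        if "=" in part:
            key, _, val = part.partition("=")
            fields[key] = val
    return fields

identity = parse_identity_header(SAMPLE)
print(identity["Type"])       # was the request authenticated, and how
print(identity["Principal"])  # who the caller actually was
```

Having these fields sitting in the target's own access logs is exactly the two-in-the-morning troubleshooting win described below.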
It just delivers the information to Lambda as an event structure, so you get all of this rich information, which is super powerful. So why is this important in the first place? Well, not only do I now have all of the information about the identity and the environment the request came from directly in the developer's HTTP logs — at two in the morning, they're typing away, looking at their logs: "is it my service?" They see the whole path of where everything came from right there, without looking at another tool. They're looking at their own HTTP logs — super powerful, great for troubleshooting. But it can also be used for additional custom authorization on the back end, and for personalization — maybe you want to treat things differently when they arrive, based on this information. It's a very, very powerful feature that I definitely think you should take a look at if you didn't see the initial launch. Next: customer managed permissions.
This is a really powerful one — I'll walk through it here. It's about sharing resources: you share services and service networks. When you share a resource, there are actions the recipient can take with it. Take a service network: if I shared it, what can the person I shared it with do with it? Can they put services in it? Can they associate it with VPCs? Can they do both? These are very important questions. What this allows you to do, without getting into the destination's IAM configuration, is attach a policy as I share it saying, "you can only associate this with VPCs; you're not allowed to put services in it," right? And that basically looks like a condition in the permission — you're describing what that person is allowed to do without having to touch the IAM permissions on their side. It's a very powerful feature, and it simplifies things quite a lot. OK?
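The idea can be sketched as an allow-list of actions attached at share time. The action names below follow the VPC Lattice API, but the overall permission shape is my own simplification of what RAM attaches, not the exact RAM schema.

```python
# Sketch of a customer managed permission attached when sharing a service
# network via AWS RAM: recipients may associate their VPCs, but may not put
# their own services into the network. Action names follow the VPC Lattice
# API; the permission shape here is a simplification, not the RAM schema.

associate_only_permission = {
    "Effect": "Allow",
    "Action": [
        "vpc-lattice:CreateServiceNetworkVpcAssociation",  # may attach their VPCs
        "vpc-lattice:GetServiceNetwork",                   # may look at the network
    ],
    # Notably absent: CreateServiceNetworkServiceAssociation, so the
    # recipient cannot add services to my network.
}

def recipient_can(action, permission):
    """Is this action in the allow-list the sharer attached?"""
    return action in permission["Action"]

print(recipient_can("vpc-lattice:CreateServiceNetworkVpcAssociation",
                    associate_only_permission))
print(recipient_can("vpc-lattice:CreateServiceNetworkServiceAssociation",
                    associate_only_permission))
```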
Questions and answers. These are just the typical ones — I picked a handful of questions that we get. I talk to customers every single day, and a lot of the time it's common things; a lot of the things that you think are unique, the people next to you probably have the same problems.
So: how does traffic flow through Amazon VPC Lattice? Great question. For this one, I get tired of drawing it up, so I want to show all of you — that way it's on the recording.
All right. With VPC Lattice, we tried to make things as generic as possible so that it could be used with as many systems as possible without a big forklift. We're not doing any kind of weird custom service discovery or anything like that. I know it looks like magic, but we tried to keep things as basic as possible so it would be easy to troubleshoot.
When you create a service in VPC Lattice — this assumes you've already got your service network and your service set up — we generate a DNS name for it: the region, the service name, a big long hash, and ".on.aws". You can also define your own custom domain names; that's what most people do. They don't usually use our big long ugly name.
And then you can point aliases at it. In this example, I've got an alias for billing.myapp.com, and the inventory service is calling it. So a standard DNS request goes out to the VPC resolver: where's billing.myapp.com?
The answer comes back based on how you have your Route 53 resolver — the VPC resolver — configured, and it might say: billing.myapp.com is actually an alias record to the VPC Lattice service, and the IP is 169.254.171.25. As you'll notice, that's not the actual IP of the destination service; it's a link-local address, local to the client.
But what that does is say: cool, I know about this, it's a VPC Lattice service, you're enabled for VPC Lattice since your VPC is associated with the service network, and I'm just going to send the traffic out my ENI — this service is effectively directly attached to me. So it goes out the ENI, and the VPC — since VPC Lattice is built into the VPC substrate — knows exactly what to do with it.
So it takes that packet into the proxy for further processing, and now traffic is automatically sent. Um, a note on this is that we do honor security groups on purpose, right? We do not want to bypass anything. So if your security group is blocking the VPC Lattice link local range, your traffic won't work. So if something happens, it's probably the first place you should check: is my security group allowing it?
Um so in this case, as long as your security groups are allowing it as long as your network ACLs are allowing it good to go. You don't need to worry about the VPC lattice IP addresses. We have a managed prefix list um that will actually be used for this.
So after the traffic gets to the VPC Lattice proxy, we validate the traffic, we do the HTTPS transaction, we identify the headers. If authentication is turned on, we run it through the authorization policies at both the service and the service network. Um these are not two different hops, it's done at one time. We've merged them and do this at one time, so it's not some big latency thing where it's having to do it twice.
Um this is what a typical policy could look like. You know, the first one: is the request authenticated? Ok, cool, the request was authenticated from my org ID. Um a great place to start. Again, this is the best way you could do it, but you don't have to have authentication turned on. You don't have to have the client do authentication to use auth policies. You can still do network level controls in auth policies.
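As a sketch of what that could look like, here's a minimal auth policy of the kind I'm describing, allow invokes only from principals in my organization. The org ID is a placeholder, and you should check the VPC Lattice docs for the exact actions and condition keys that apply to your setup:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "vpc-lattice-svcs:Invoke",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalOrgID": "o-myorgid1234"
        }
      }
    }
  ]
}
```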
Um it's just, you know, I encourage you to, and then at the service level, obviously, doing the fine-grained policy. So we've gotten to the service network, we've routed the packet there, we've run it through the auth policy. Now we end up going to the traffic management section.
So after we've run it through, we've done the DNS request, we've routed the packet to VPC Lattice, we've run it through the auth policies, um it's been approved, it's allowed. And so now it's able to do the traffic management stuff, and this is where it looks at the rules and target groups where your developer or the service owner defined how to actually route these packets. Ok?
And then at that point, the traffic is automatically routed, even across VPCs and accounts, to the destination. It will arrive from another 169.254 address, which is the Lattice range. Ok? Your security group will have to allow that traffic in, otherwise it will not be allowed.
And so this is a picture right here of the managed prefix list. So don't use the IPs, don't worry about the IPs, use the managed prefix list. There's one for IPv4 and one for IPv6, so just use both of those, put them on everything and you're good to go. Ok?
Not gonna read this. I don't want you to read this. If you wanted a recap of what i just said, just for later reference, you could take a picture of this, but don't worry about reading it.
So I'll pause here for two seconds until i see people put their cameras down. Ok.
All right. How does pricing work? Everybody wants to know this. What is a service? Am i paying for every single instance of my service, all this kind of stuff?
So there's three dimensions of pricing. Um typically most customers are only really looking at two of them. There's an hourly charge per service, there's a data processing charge, how many gigabytes are being processed, and then there's a requests charge.
The requests charge is the one that customers usually don't really care about, because there's a free tier that covers 80, 90, sometimes 100% of your services. Um the first 300,000 requests per hour are free. If you go beyond that, it's 10 cents per million requests. So if you've got some super hyperscale service, you might fall into that tier, but your 80th percentile or your long tail wouldn't hit that.
On the hourly charge per service, um in IAD you can think of this as like $18 per month per service. And again, keep in mind this is handling your network level connectivity, your application load balancing functionality, all that kind of stuff, and it's a DNS name. So if you have multiple paths behind a certain DNS name, that's still one service, right? It's not multiple services. So there's actually a very flexible way of thinking about that.
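If you want to sanity check your own numbers, the three dimensions are simple enough to model. This little sketch uses illustrative rates, roughly the IAD numbers from around the time of this talk, so verify them against the current pricing page before budgeting off of it:

```python
# Illustrative VPC Lattice pricing model; rates are assumptions, check the
# current AWS pricing page for your region before relying on them.
HOURLY_PER_SERVICE = 0.025    # USD per service-hour (~$18.25/month at 730 hrs)
PER_GB_PROCESSED = 0.025      # USD per GB of data processed
FREE_REQS_PER_HOUR = 300_000  # free tier, per hour
PER_MILLION_REQS = 0.10       # USD per million requests beyond the free tier

def monthly_estimate(services, gb_processed, avg_reqs_per_hour, hours=730):
    hourly = services * HOURLY_PER_SERVICE * hours
    data = gb_processed * PER_GB_PROCESSED
    billable_per_hour = max(0, avg_reqs_per_hour - FREE_REQS_PER_HOUR)
    requests = billable_per_hour * hours / 1_000_000 * PER_MILLION_REQS
    return hourly + data + requests

# A service averaging 100k requests/hour never pays the requests dimension:
print(monthly_estimate(services=1, gb_processed=50, avg_reqs_per_hour=100_000))
```

The point the model makes is the one above: unless your sustained rate clears 300,000 requests per hour, the bill is just the service-hours plus the gigabytes processed.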
There's no attachment or endpoint cost. Uh so this is slightly different than other AWS services where you pay for the consumer. Like if you have 10,000 VPCs consuming this thing, um you're not paying each time you consume it, you're just paying for the service. The consumer has no association charge, the VPC association is free.
Um again, a service can consist of thousands of targets. You're not paying for the targets, you're not paying for running a load balancer next to every single target. It's just the front end, that logical abstraction.
Um yeah, on the data processing something to touch on here. Uh this is all encompassing, right? It is the load balancer, it is the network connectivity uh and it is covering all your data transfer costs and everything else like that.
There is no data transfer cost. If you're talking to things across AZs or anything like that, it's just all encompassing in this uh in this data processing charge.
Benefits of a managed service:
All right. So this is important to touch on too, because I think a lot of people forget that. Um it kind of sounds weird, but it really is important to have a line item for your infrastructure expense. It's kind of that idea of, how do you know how to drive efficiencies into your cost and infrastructure if you don't know what it costs in the first place? Uh and this is the thing that we discover when a lot of customers are doing a lot of this stuff themselves. They don't think about the cost that's associated, which might be showing up as an EC2 cost instead of an actual proxy cost or something like that. And so it's just one of those things where it is important to actually see a line item or deeply understand the true cost of what your infrastructure is. And this kind of solution, giving you a pay as you use model, is really important because you can actually use it to drive efficiencies on the cost predictability side of things.
The whole point of a managed service is, you know, to spend less time working on the platform and more time using the platform, as the scale of your applications goes up to meet the demands of your growing business. Speed and simplicity matter. Take advantage of the innovations that are happening behind the scenes with a managed service, right? This doesn't have to be VPC Lattice, this is just generic information on the benefit of a managed service.
From an operational resilience perspective, um, you know, at AWS we take operational resilience and availability at scale very, very, very seriously. Um VPC Lattice has a 99.9% uptime SLA for single AZ deployments, and if you deploy your applications across multiple AZs, we've got a 99.99% SLA.
The benefit of a fully managed service: well, it gives you the comfort of not having to worry about minimizing the downtime in the event your sidecar proxy has some sort of zero day security event, where you have to figure out, how do I actually get in front of this? This happens behind the scenes without you doing anything. And then business agility, you know, VPC Lattice helps you move faster, simple, period, end of story. Um, let your developers build new customer facing products and features and become better at what they need to deliver, instead of becoming networking experts.
Um so the whole idea is just, how do we take away undifferentiated heavy lifting, right? If there's certain things you want to keep and do, obviously do that. But if there's certain things that you really don't think are providing value to your actual business needs, it's something to evaluate.
Ok, enough on pricing. All right. Can I have more than one VPC? Sorry, can I have more than one service network per VPC? Another super common question I get. Ok, simple answer: no, but you shouldn't need to.
Ok. Um services can belong to as many service networks as you want. And so if you have a VPC that has super unique connectivity requirements, create a new service network and put those services in there. There's a couple of customers that literally have a 1-to-1 mapping, where they're defining it almost like an application layer network for that VPC. In the bottom left, top secret example, I'm showing just that, right? There's a couple of services it still needs, I can build a service network for just it and then connect it.
Is VPC Lattice only for microservices?
Really good one. TLDR: no. We don't care what it is, right? It's not just for microservices. It could be a monolith, it can be a combination of the two. This is the whole point, right? We want to abstract all of that back end stuff. That's the whole point of having that front end abstraction. So what does this look like? Everything behind the scenes in VPC Lattice is an IP address. It's not hiding anything, it's not doing anything complicated, but it gives you an abstraction in front of these things.
So parking.com starts the migration process. I can slowly move things over and modernize at my own pace, and say my clients are still calling parking.com. But as I've moved the rates service over, I can load balance it over to the rates service. When I move the payment service over, I can do that, and then I can have everything else still hit the default forward rule back to the monolith, and I can keep it there forever or I can migrate that too, right? So this is made for whatever application you're using. It's not just for microservices.
Is VPC Lattice a service mesh? This is a big one too. Ok, so what is a service mesh in the first place? Typically a service mesh is a control plane and a data plane. The control plane is typically things like Istio, Linkerd, etc. And the data plane piece is something like Envoy, Linkerd, NGINX. There's a whole bunch of them out there.
Um typically you either manage or have a managed service to do the control plane piece and then you have a sidecar proxy a k a the data plane next to every single workload, right? So if there's 1000 instances of that workload, you have 1000 proxies running next to every workload.
Now, um this is good and bad, right? There's pros and cons. Uh it does handle a bunch of really cool application layer tasks. Um you know, service discovery request level, routing, encryption, authentication. Um really neat handy things. Typically it only works with Kubernetes. Um there's some ways to do other things um but it can be challenging.
Uh the other part here that I call out is that it doesn't handle network connectivity. It really is that overlay that lives in the systems. From a cost perspective, uh what you're paying for doesn't really have a line item a lot of the time. You typically pay for whatever the control plane is, whatever you're running there on EC2, and then you probably pay for the data plane, also on EC2, next to your workload. As your workload scales up, guess what? So does the proxy. The processing and memory it's doing scales up right beside your workload. And so the challenge there is you are paying for it in compute cost.
VPC Lattice is a little bit different. It's a fully managed service, both the control plane and the data plane are managed for you, and it doesn't live in user space. It's built directly into the VPC. It scales completely independently of the workload. And that really matters from a performance perspective, because as your workload scales up, instead of latency increasing with your workload, VPC Lattice has a flat line and stays consistent, because it doesn't live next to the actual workload as it's scaling up and needing more memory resources.
Um it handles all the same kind of common tasks, right? Service discovery, request level routing, encryption, authentication. It also does authorization, and it also does the network level connectivity. So that is a big difference, right? It's both of those features and functions. And again, like we covered, the model is slightly different from a pay perspective.
Um you pay for service usage, not for how many targets running a sidecar you have. So the TLDR, the summary there, is no, it's not a service mesh, but some of the features are the same and it's implemented a little bit differently. It's built directly into the VPC instead of your compute environment, your user space, and sidecars are optional. If you want to continue using a sidecar to do certain tasks, if you have the need to offload certain features and functionality, you can totally keep doing that. We're not saying don't do that, it's just you don't have to, right? And for a lot of customers' basic requirements, you shouldn't need to use one, so why add that extra complexity? But if you need it, you can definitely use it. It's a fully managed control plane and data plane, and it works across instances, containers and serverless, so very, very flexible there.
All right, top architecture patterns, starting small. Ok. Uh everybody's been here at some point, and this could be even big companies that are starting a new application or a new product. Everybody starts here somewhere. The thing I'd like to call out here is that whenever you hear all of us talk about VPC Lattice, we're usually talking about it as something that solves all your massive connectivity problems across VPCs and accounts. Uh but I want to emphasize, it actually helps the small user too by reducing complexity, because it's just a load balancer, it's just a proxy. And so you can greatly simplify even your app to app or service to service communication in a single VPC. You don't have to have multiple VPCs.
Um it's the quickest way to get connectivity, application level, load balancing, authentication, authorization, so on and so forth without any kind of subnet interaction or anything like that. So it's a very kind of cool use case that just kind of makes that simple.
The other part could be the same application, but maybe this company's acquired another company or something like that, and you're scaling up. Uh this is where the multi VPC stuff comes in. When you do add more applications across VPCs and accounts, your operations, your process to onboard things, doesn't change. It's the exact same, and you don't have to learn anything different, right? You get that same benefit without changing your infrastructure, without introducing new connectivity patterns between VPCs and having to worry about, am I getting the subnet routing correct? Right, it just works out of the box.
Why is VPC Lattice a good therapist? Well, it's because it's great at addressing problems, emphasis on the dressing. I'm glad at least one person laughed. That makes me feel better.
Ok. Um so we talked about this earlier. Overlapping addresses and IPv6 migration are top of everybody's mind for all sorts of reasons. We would love to see everybody on IPv6. I just have a feeling that 20 years from now, we'll still be having the exact same conversation we had 20 years ago. So this really helps out here, right? If you're talking service to service communication, east-west communication, I don't want you to have to worry about this. I don't want you to think about IP addresses. This completely removes that need, and it also lets you abstract it. If some services move to IPv6 because it's really important for them, maybe it's a public facing service, they can do that without making the client do it at the same time. Right? Like, it could be some client that hasn't been funded in five years. And do you really want to spend all this effort upgrading that when it's not even a really useful product? Let these teams set their own priorities. If it makes sense for them to go IPv6, cool. Let this handle the rest.
Now again, I'd want to be very clear that I'm not advocating IPv4 is the right way to go. It's just this can ease that migration and you can use this as a strategy to really make you move fast.
Um if you want to hear more about this, uh we did a great podcast with the Cables to Clouds folks. Um I don't know if you'll be able to pick up this QR code or not, but if you can't just let me know afterwards and we can figure it out. Um but that podcast covers it and there's a great demo that uh is also done where it shows you how to do this through the whole configuration and everything migrate at your own pace.
Ok. This one is killer, because at the very beginning everyone was stressing out: I want to go VPC Lattice, but I have an existing infrastructure, how do I do this? Or, I've already implemented a service mesh, how do I do this? Um, really easy actually, turns out. You can run these things side by side, it's just standard protocols. So in this example, I've got a travel service living on the right hand side, I think it's the right hand side for you too, living behind an Application Load Balancer. Ok? And I'm connecting to it through transit gateway. It's just a standard DNS request, travel.myapp.com.
It returns 10.0.0.1, which is the load balancer IP, and I get routed through the route table entries and transit gateway over to that destination.
Um now you can go and create a VPC Lattice service without it doing anything. Like, just making a VPC Lattice service doesn't change anything. All your other stuff still works. You can still connect to the DNS, to 10.0.0.1. It's just that we've generated a new DNS address, so that if you want to connect to this new address, you can.
Ok, so what does that look like now? So I've created the service, I've put it in the service network, and I've associated that VPC with the service network. At this point I'm still using the load balancer, not changing anything. But now I want to change that, and I only want to change it for one VPC. I don't want to change it for everybody because I'm not ready yet.
And so what I do is I put a private hosted zone out there in VPC two, and I tell it that I'm aliasing this DNS record, travel.myapp.com, to the VPC Lattice service's DNS name instead of the load balancer's. And that greatly simplifies it. You can keep both up and running for as long or as little as you'd like.
Right, and if it's all VPCs you're trying to do, you can just use a public hosted zone. You don't have to use a private hosted zone either. It's just a very flexible option.
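One way to picture the private hosted zone trick is as a per-VPC override on top of the shared answer. This toy resolver, where the names and addresses are made up for illustration, shows why VPC two gets the Lattice answer while everyone else keeps hitting the load balancer:

```python
# Toy model of the migration: a private hosted zone associated with one VPC
# overrides the answer for travel.myapp.com, while every other VPC keeps
# resolving to the existing ALB. All names/addresses here are illustrative.
GLOBAL_ZONE = {"travel.myapp.com": "10.0.0.1"}  # the existing load balancer
PRIVATE_ZONES = {
    "vpc-2": {"travel.myapp.com":
              "travel-0abc123.7d67968.vpc-lattice-svcs.us-west-2.on.aws"},
}

def resolve(vpc: str, name: str) -> str:
    # Route 53 consults the private hosted zone associated with the querying
    # VPC first, then falls back to the shared answer.
    override = PRIVATE_ZONES.get(vpc, {}).get(name)
    return override or GLOBAL_ZONE[name]

print(resolve("vpc-1", "travel.myapp.com"))  # still the load balancer
print(resolve("vpc-2", "travel.myapp.com"))  # the Lattice-generated name
```

Deleting the private hosted zone, or adding one per VPC as each team is ready, is the whole migration mechanism: clients never change the name they call.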
So summary of that is kind of, you know, if you see how this works with your existing architecture service discovery is nothing fancy. It's DNS, the real power of it though is that the DNS is local, you're not using DNS as like a fail over mechanism or something like that.
Um so you're not really super concerned with TTLs and things like that, the IPs could change. So you still want to use DNS. It's just, it's not very frequent, you can migrate clients and services independently.
Um this is just an abstraction, right? So you can do this at your own pace, you can do it for some VPCs, you can move services and clients independently, whatever makes sense to you. Ok, external connectivity. I'm gonna kind of pick up my pace a little bit.
So, ingress, another super popular topic. This one's great. Um a lot of customers, a super common pattern is: I've got a lot of VPCs, I've got a lot of applications living in their own VPCs, how do I do central ingress? Because dealing with internet gateways on every single VPC is kind of a nightmare. How do I do this?
And so customers have moved to this model of having a centralized VPC and handling ingress there. The same thing works here. The only difference, though, is that VPC Lattice does require the connection to come from a VPC that's been enabled for VPC Lattice.
Ok. So we have to have something there and that's what I'm showing here. We've got an Elastic Load Balancer, something like an ALB or NLB both work just fine. And then I've got some sort of EC2 or something running a proxy. This can be your favorite proxy of choice. It can be a static proxy that just has an Auto Scaling group. Just turning the request around. I have to have something there today.
Ok. So the client on the outside connects to the LB, it runs to the proxy that's just making a connection, and then all of that connectivity is solved for you. You can do your authentication, authorization, and literally cut off all ingress points to the actual application VPCs.
Ok. Another really popular example: can I put a firewall in front of it? Absolutely. This is just typical VPC architecture. You could use the AWS Network Firewall, or you could use your favorite firewall vendor of choice if you use it with a Gateway Load Balancer, and it just works out of the box. AWS Shield, the WAF, same thing, just put that right in front of your ELB, and now you have that on ingress as well. This is fully supported today.
What about multi region? What about ingress where I might want to dial connections from one region to the other, or I might be doing it for availability or performance reasons? I can do the same thing with Global Accelerator. Global Accelerator is integrated with ELB, so I can do that same exact architecture, cookie cutter, replicate the service and service network in multiple regions, and then use Global Load Balancer, sorry, Global Accelerator, to shift that traffic around as you see fit. If you're interested in this topic, and you're gonna see a couple of these as we kind of wind down,
definitely check out the blog by Adam and Pablo, colleagues of mine, who just did a fantastic job writing about this architecture pattern. They've got CloudFormation templates in there that will actually spin this up for you, with ECS Fargate and an NLB, and the ECS Fargate task is just running a very simple and basic NGINX proxy. So out of the box you can do this. This same design is also pretty popular for multi region connectivity.
So if you've got connectivity from a VPC in one region to the other, it doesn't have to be a user outside on the internet. So definitely check that out.
Another pattern is the serverless model. And so this was actually from a customer during our preview who was trying to solve this problem. They were just like, hey, I'd like to do ingress, and they did this on their own without saying anything. And I was like, hey, how did you do that? And they're like, oh, I just whipped together an API Gateway because I didn't want to have a VPC on the front with an internet gateway, and I had a lambda proxy. That lambda proxy is just a private lambda function that has an ENI in a centralized VPC. Even the ingress VPC doesn't have an internet gateway on it. Super, super powerful.
Um and the cool part here is that lambda proxy can be whatever you want. So we have a couple of examples in a blog, I'm gonna show it to you in just a second. But you can do header manipulation here, you can do whatever you want here. It's very, very powerful to be able to do something like this, and it's completely serverless ingress.
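To give a feel for what that lambda proxy might look like, here's a minimal sketch of a handler that replays an incoming API Gateway HTTP API event against an internal service name. The service domain and event fields are assumptions for illustration; the blog's code is the real reference:

```python
import urllib.request

# Hypothetical internal Lattice service name; the function's ENI sits in a
# VPC that's associated with the service network, so this name resolves there.
LATTICE_SERVICE = "https://billing.myapp.com"

def build_request(event: dict) -> urllib.request.Request:
    """Turn an API Gateway HTTP API (v2) event into a forwarded request."""
    path = event.get("rawPath", "/")
    query = event.get("rawQueryString", "")
    url = LATTICE_SERVICE + path + ("?" + query if query else "")
    # Drop the inbound Host header so urllib sets it for the new target;
    # everything else (traces, auth headers, etc.) passes through.
    headers = {k: v for k, v in event.get("headers", {}).items()
               if k.lower() != "host"}
    method = event.get("requestContext", {}).get("http", {}).get("method", "GET")
    return urllib.request.Request(url, headers=headers, method=method)

def handler(event, context):
    req = build_request(event)
    with urllib.request.urlopen(req) as resp:  # runs inside the Lattice-enabled VPC
        return {"statusCode": resp.status, "body": resp.read().decode()}
```

This is also where the header-manipulation idea comes in: anything you want to rewrite or inject can happen in `build_request` before the call goes out.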
This is the blog that um talks about it. Definitely recommend checking it out. It's relatively new. Um but it's got a lot of traction so definitely worth the read on this one again. When these are posted, uh you'll be able to get them if you're not snapping the pictures on time.
Ok. That is the end of my kind of architecture uh patterns. I do want to leave you with a couple more um follow up items uh really to kind of show you like, how do we get a hold of us? How do you keep your education going? How do you get started? All that kind of stuff? Um just to kind of wrap things up.
So we've developed a whole bunch of workshops in Workshop Studio. Uh this is a fully guided workshop, so you can go in, you can play around with the labs. We've got ones for ECS, we've got ones for EKS, we've got ones for Lambda. There's a bunch of different ones in there that will actually walk you through how to do it, so you can get your hands in there, kind of feel it and see for yourself. Sometimes it's useful to do that kind of stuff. So the Workshop Studio link is in that QR code. I think there's four of them in there today, and we're adding more over time.
VPC Lattice blogs, there's a ton of them out there. I highlighted a few of them. Uh these are the ones that I think we've kind of talked about today, that might be good reading and that go a little bit more in depth.
Uh it's always that balance, this is a 300 level talk, and I'm like, I'm gonna go over some folks' heads and I'm gonna go under some other folks' heads. But the one that we didn't talk about, there's the IPv6 adoption one that shows you how to do the migration. You could pair that with the Cables to Clouds podcast.
Um you've got the top one on how to build typical VPC Lattice connections, and then the bottom one is a really interesting one: it's how you actually integrate VPC Lattice with your VMware environments. So this one is a very purpose built and kind of interesting solution. If you're dealing with VMware workloads, it's a great way to get started.
I also want to highlight a couple of our more popular videos we've done online. Uh I've put together a YouTube playlist, I'll be adding things to this over time, that's the one on the right. There's a whole bunch of things that I've just found. Not all of them are from Amazon either. There's a bunch of other podcasters and people that have put together Lattice demos that I thought were really interesting, and so I put them in there and I'll keep adding stuff over time.
The one on the left is the Routing Loop on Twitch, and if you don't watch that already, even if you're a developer, you should go watch it. Really funny moderators on there, and a really good selection of content. But this one specifically is a really powerful one. There's a blog we just released a couple of days ago, this is not the QR code for that, but it's a way to actually use tags to remove the humans a little bit and automate all the service associations and service network to VPC associations. So that's definitely a video you might want to see, where we're showing a demo of how you can do that, and now there's a blog out there that shares the code of how to do it as well.
And then last but not least, um, the Gateway API controller is now GA. So this is for Kubernetes workloads. It's an open source controller. You can use this to automatically do all your VPC Lattice stuff without actually learning VPC Lattice. You can just use the native Kubernetes APIs to go and do all this stuff for you.
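As a rough idea of what that looks like, here's a minimal Gateway plus HTTPRoute of the kind the controller reconciles into a Lattice service network and service. The names and the gateway class here are illustrative, so check the controller's own docs for the exact values it registers:

```yaml
# Illustrative Gateway API resources for the AWS Gateway API controller.
# The gatewayClassName and all names below are assumptions for this sketch.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: my-service-network   # maps to a Lattice service network
spec:
  gatewayClassName: amazon-vpc-lattice
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: rates                # maps to a Lattice service
spec:
  parentRefs:
    - name: my-service-network
  rules:
    - backendRefs:
        - name: rates        # an ordinary Kubernetes Service as the target
          kind: Service
          port: 8080
```

The point is you never touch the Lattice APIs directly, you apply standard Gateway API objects and the controller creates the service network, service, listener, and target group behind the scenes.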
So definitely join the community. Um we'd love to see any issues that you find, issues being, like, if you'd like to see new features, new functionality. You can go ahead and create a PR and we'll definitely be watching that. And if you're interested in getting an overview of the Gateway API controller, there are two blogs dedicated to just the API controller and Kubernetes integration by itself.
Ok? And lastly, um there is one session I would recommend uh if you're hanging around, if you're one of the hardcore crew, definitely go to Alex and Matt's talk. The wizard talk uh in breakout session format is always on Friday and it's the Advanced VPC design and capabilities.
Uh Alex and Matt did a fantastic job the last couple of years and they always do a good job. VPC Lattice will be highlighted in that, as well as all of the other typical VPC networking services.
Um and so definitely want to check that, that out if you're still around, that's NET 306.
So thank you. And uh please remember uh let us know how you like the session and uh even if it's just to say hi.