Adding the AWS backbone to your network

So, hi everyone.

Welcome to our session, NET319: Adding the AWS backbone to your network.

We do really hope that you are having a lovely first day here at re:Invent and that you have decided to spend the next hour or so with us.

So we are assuming that if you are in the room today, it's because you're either a cloud infrastructure engineer or you are a network engineer breaking into the cloud or you're just curious about networking.

So we are going to be catering for all of these personas in this session, right?

So this session is going to be mainly about the backbone. We are going to talk about the infrastructure behind the AWS backbone network, but we are also going to touch on some of the core principles and tenets that we have used to build this network.

And then we are going to move on to some of the overlay services that you as customers use to leverage this infrastructure in the background, right?

We are also going to be talking about some of the common patterns that we see when we talk to customers, putting together multiple services or tools from the toolbox that is our networking portfolio.

So you can build the right pattern, the best architecture for your business, right?

And then we're also going to have one of our customers sharing their case study and how they have modernized their network using AWS networking services.

Now, this is a level 300 session.

So we are expecting that you have some foundational knowledge of networking services and protocols, but also of AWS fundamental concepts, right?

So you are familiar with the cloud, at least. Either way, I'll be refreshing your memory.

I know that there is a lot of content, a lot of services out there.

So in the first section of the session, I'll be touching on some of those fundamentals just to bring all of the audience to the same level.

Cool.

So my name is Ernesto Salas.

I'm a senior network specialist and I've been working at AWS for around four years, basically helping customers do all things networking in AWS.

And I'm joined today by my friend Corina Motoi.

She's a senior solutions architect working for the hybrid cloud team.

And we also have Nick Mullenix from Fortive.

He is going to be talking about their use case on AWS.

Cool.

So I wanted to start the session intentionally with this cheeky picture, right?

Because it depicts what I spend most of my days talking to customers about.

And this is the summary of most of my discussions with them, which is: they are moving their workloads to AWS.

They currently have workload migrations ongoing at the moment, or they are finalizing their migrations, and their data center footprint is starting to look a little bit like this.

So it's starting to shrink, and their colocations are starting to shrink as well.

And then the data center today looks a little bit like four racks back in the corner that are hosting just hardware devices, specifically networking hardware devices.

And when talking to customers, we have seen how, as the gravity of these workloads moves to AWS, the gravity of the network moves as well, right?

And we have seen, and we have been helping, customers leverage the AWS backbone network to interconnect their workloads.

But more importantly, now we have seen how customers are using the AWS backbone network, through some of these services, to also interconnect their branch offices.

So in the agenda, i'll be covering our infrastructure.

So: how we have built the largest and most scalable network in the world, the tenets behind how we did it, why we did it, and how we took some of those tenets

and applied them when building these overlay services.

Then I'll do a quick recap. As I mentioned, we have multiple services within our portfolio, around 40-plus features and services within the networking and content delivery portfolio, but we do have some common patterns that a lot of customers use today.

I just wanted to quickly go through how we have applied those principles to build these services, and why.

And then I'll pass it on to Corina to talk more about all of these different patterns and how different services come into play.

Right.

And then at the end, we'll have Nick covering the use case for Fortive. So let's get started: a map of the world with all of our global infrastructure over it, right?

So when we talk about the AWS global infrastructure, we mainly have our Regions.

AWS Regions: without going too much into detail on what a Region is, you can think of it as a geographical location where you will find at least three Availability Zones or more, and within an Availability Zone, you will have one or more data centers.

So you can think of an availability zone as a cluster of data centers, right.

Then we have local zones and you can think of the local zones as an extension of these regions into heavily populated areas.

And we are trying to solve basically a networking problem with local zones which is bringing the application closer to the customers and reducing latency.

Another way of thinking about a Local Zone is as an Availability Zone that is closer to customers, right?

So that's in heavily populated areas. Then we have our points of presence, which are a combination of edge locations and Direct Connect locations.

For edge locations, we're talking about CloudFront, our CDN, for example, or AWS Global Accelerator.

And Direct Connect is a service that allows you to extend your private connectivity into AWS, right?

If we zoom in quickly into how, at a very high level, the network topology for a Region looks, this is what you would find, right?

At least three Availability Zones; within each Availability Zone you'll have one or more data centers, and in each one of those boxes you'll find that we have hundreds or thousands of networking devices and hundreds of thousands of links between them.

That provides you highly available connectivity and architecture within the Region.

Based on your feedback from previous events, we have been releasing more and more content where we do deep dives into the specifics of this architecture.

But this is not what we are going to do today. Today, we're going to talk about the backbone.

So coming back to the map, i think it looks even more impressive when we overlay the backbone network on top of it.

Each one of these lines highlighted there, you can think of them as a pair of 400-gig connections, or fiber cables, that we manage and operate globally.

So this is all a private network that AWS has built, manages, and operates, right?

Now, why?

So very early on, when we started building our cloud business, we decided that, to be able to pass on to you as our customers the availability, scalability, performance, and security that you were expecting from us, we had to have full control over our network.

So very early on, we took the decision that we wanted to build our own global private network, and that led us to also take the decision to start developing our own custom hardware and custom software.

Usually when we talk about this, the most common example would be the Nitro platform that we use to power our EC2 services and VPC, for example.

But this is also true for the networking hardware or networking devices, right?

Because you can imagine, at the beginning, when we started working with traditional third-party vendors, we were using their chassis platforms, right?

The typical refrigerator-sized routers that you have probably seen.

And these are amazing pieces of technology.

They are amazing at what they do, but we were facing a new challenge, a challenge that no other company had.

And these pieces of hardware had a lot of features, because they were trying to cater for a lot of customers with a lot of different use cases.

We had a very specific use case and it was very specific to us.

So we decided to move away from these complex chassis platforms.

Just to put an example, putting my operational hat on: think that if one of these chassis needed a reboot, that would be a very large task, right?

And at our scale, we could not afford having reboots at that scale, right?

So we could not reboot an entire chassis.

So we decided to start venturing into developing and building our own hardware.

And this hardware is typically single-chip network devices, right?

So you can imagine it as a one-rack-unit IP switch or router that we would use across all of our networking stack.

The same device could be used as a top-of-rack switch.

We could use it in the backbone, we could use it at our edge, and we would have different flavors of the same device.

The flavor would typically be different speeds, right?

That tenet of simplifying we also applied when we started creating cells, right?

This is the messaging you are probably more used to when we talk about EC2 networking.

When we talk about VPC or the Hyperplane technology behind Transit Gateway, for example, we talk about cellular architectures, right?

And then what we did is that we put all of these one-rack-unit switches or routers in a single rack, and that would create a cell.

So if we needed to replace a cell that was faulty, we would just remove it and send a new one, and we were making sure that if there was a failure within that cell, it was not leaking into another cell, right?

So this is true across different stacks in our network, including the backbone.

Now, sometimes we had to add special modules, because whenever a packet leaves our facilities, we encrypt it at the physical layer.

So all traffic leaving AWS facilities is encrypted at the physical layer.

In that case, we would add that special module to the device sitting at the edge, right?

And that leads us to our core tenet.

This is the core tenet, or principle, that all of our development teams follow, across the infrastructure organization but also the overlay services, the networking services that you consume today as a customer, right?

So this is the guiding principle: simplicity scales.

The best example for that is VPC.

At the very beginning, when we launched VPC, we really thought that customers were only going to use one VPC, and that was actually hard-coded in our systems, right?

If you needed more than one VPC, you had to give us a ring, you had to give us a call, and we started to receive a lot of calls, because customers started to use VPCs to segment their business units, for example, or their applications.

So going basically into a multi-VPC environment.

But that brought its own challenge, which was: how can we connect our VPCs together, our applications together?

So we decided to launch Amazon VPC peering, and a peering, in a small environment, you can think of as a way of connecting two VPCs together.

So it's a one-to-one, point-to-point connection, but it's non-transitive, right?

This is one of the foundational rules within AWS networking: with VPC peering you connect two VPCs together, and you cannot use a VPC as transit between multiple peerings, right?
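
If it helps to make that concrete, here is a minimal boto3 sketch of setting up a peering between two VPCs and adding a return route; the VPC IDs, route table ID, and CIDR are placeholders, not values from the talk.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a peering between two VPCs (placeholder IDs).
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111",        # requester VPC
    PeerVpcId="vpc-bbbb2222",    # accepter VPC (same account/Region in this sketch)
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The owner of the accepter VPC has to accept the request.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Peering is non-transitive and does not edit routes for you: each side
# needs an explicit route to the other VPC's CIDR via the peering.
ec2.create_route(
    RouteTableId="rtb-aaaa1111",
    DestinationCidrBlock="10.1.0.0/16",   # CIDR of the peer VPC
    VpcPeeringConnectionId=pcx_id,
)
```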

This solution and the service work very well even today.

And you can use it if you have 10 or 20 VPCs; it's completely fine.

It's very manageable.

But if you start scaling to hundreds or thousands of VPCs, then it gets a little bit out of control.

So in 2018, we decided to launch Transit Gateway.

If you are not familiar with Transit Gateway, you can think of it as a regional router.

Now, in the background it doesn't look anything like a router, but logically you can think of it as a regional router; Transit Gateway allows you to concentrate your connectivity within a Region, and it scales to thousands of VPCs.

So we were solving that scaling problem.

Transit Gateway also allows you, through integrations, to extend that connectivity to your own premises, right?

Either through an integration with Direct Connect or by integrating with Site-to-Site VPNs.
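
As a rough sketch of that pattern, here is how a Transit Gateway, a VPC attachment, and a Site-to-Site VPN back to on premises could be created with boto3; all IDs and the ASN are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The regional "router": one Transit Gateway per Region in this pattern.
tgw = ec2.create_transit_gateway(
    Description="unicorn-regional-hub",
    Options={"AmazonSideAsn": 64512, "DnsSupport": "enable"},
)["TransitGateway"]

# Attach a workload VPC (this scales to thousands of VPCs per Region).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw["TransitGatewayId"],
    VpcId="vpc-aaaa1111",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
)

# Extend to on premises with a Site-to-Site VPN terminating on the TGW
# (a Direct Connect integration via a DX gateway is the other option).
ec2.create_vpn_connection(
    CustomerGatewayId="cgw-cccc3333",   # represents the on-prem router
    Type="ipsec.1",
    TransitGatewayId=tgw["TransitGatewayId"],
)
```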

And this, what you are seeing on the screen right now, became one of the most common patterns that we see with customers.

This super common pattern is the best practice for most of our enterprise customers.

And we see this being applied by many of the customers that we talked to today.

But as you move away from your data centers, you are not bound to that physical location anymore, the geographical location, and you can start leveraging multiple AWS Regions to bring the application closer to your end customers.

So that's what customers are starting to do.

And therefore they started replicating that very same pattern into multiple different regions.

So if you have three Regions, we go back again to cross-Region peering connections and to point-to-point connectivity.

Now as this scales, we go back to the same challenge, right?

So how can we manage 10, 15, 20 Regions, right?

It starts to become a really complex network, especially if you don't have dedicated networking personnel to manage this global network within your cloud environment.

So last year, during re:Inforce, we announced AWS Cloud WAN, a service built on the same technology as Transit Gateway.

And it simplifies connectivity at a global scale.

We also introduced our new vision for networking services, which is intent-based networking, where you tell us how you want your network to look and we build it in the background for you, with backwards compatibility with Transit Gateway.

So this combination that you see on the screen is where the most common pattern, using Transit Gateway to connect Site-to-Site VPNs, integrates with the new way of doing global networking in AWS, which is AWS Cloud WAN. But these are just some of the services that we have, right?

And as I mentioned a couple of times by now, and as you are probably aware as a customer building networks in AWS, your environment is going to consist of multiple different networking services coming together.

So it's going to be Network Load Balancers, it's going to be Site-to-Site VPNs or Client VPNs, it's going to be Transit Gateway.

Maybe you're experimenting with some of the new services like Amazon VPC Lattice, right?

All of these overlay services are a way for you to consume our underlying networks.

So you can think of them as tools, right, within a toolbox.

But what we want to show you is how you can use multiple different tools that are the right tools for the right job.

And that's only going to be relevant within your context.

You can have common patterns, but it doesn't mean that you have to take that common pattern and just implement it as is in your environment, right?

We also have other networking services like AWS Local Zones or Amazon peering, which are a little bit more niche.

But they are also ways of leveraging the AWS backbone network.

So, as I mentioned at the beginning, Corina and myself spend a lot of time talking to customers, and by talking to customers, we get to see and put together some of these common patterns.

So Corina is going to walk you through all of them.

Thank you.

Hi, everyone

Indeed, we do talk a lot with customers and partners. So I will introduce you now to Unicorn Dot Inc. Let's have some fun.

So Unicorn Dot Inc is our customer, a global customer with a data center and headquarters in New York and also a headquarters in Mumbai. Their offices, their on-site locations, are connected via their global WAN, which is basically a mix of network technologies like MPLS, internet services, you name it.

These are used to connect the other locations, but they are starting to scale. And what they care about is the experience of the end users; like all of us as well, we care about our end customers, we need to provide them the best possible experience.

So for that reason, how can they scale while improving latency, while improving the performance of their applications? Well, by moving to the cloud, to the AWS cloud. Because of proximity, they choose to use the Virginia Region, which is closest to the New York headquarters, and then the Mumbai Region, which is closest to the Mumbai headquarters, of course.

And once they are in the cloud, they now have a mixed, hybrid environment. The next question is: OK, how can I, Unicorn Dot Inc, actually improve the experience of my end users?

So let's have a look at how we can do that using the AWS backbone. That's what the session is about, isn't it?

So again, we have this Unicorn Dot Inc architecture, a mixed architecture, cloud and on-premises locations. To make it more real, let's assume that Unicorn Dot Inc is a gaming company and their game, Unicorn Assembly, is so popular that users and gamers all over the world want to play it.

We have Mexican users, we have Australian users, and everyone wants to play it because it's super popular. You might know some of those games.

But they are accessing the game, which is deployed in an AWS Region, via the internet: Mexico to North Virginia, or Mexico to Mumbai. Well, you might imagine there might be some unreliable connections there.

So then what can we do? The users want to have a great experience when accessing the game; they want fast responses when they are trying to play. So let's see how we can improve that user experience, or gamer experience, whatever you want to call it.

For that, let's look at the AWS global network. As Ernesto was saying, it's a private backbone that interconnects all the AWS data centers around the world and the edge locations with fully redundant 400-gigabit fiber.

This web of fiber cables and network devices is basically created to facilitate ultra-low latency across the globe between all the AWS locations. In combination with our 550-plus points of presence, which we call the edge PoPs, they work together to bring services closer to the end user, so that you can have a better experience and better performance when you are interacting with deployments in the cloud.

So let's see how they do that. As you might imagine, there is not one product, one service that fits all cases; we all have different experiences.

So let's go to the first one and say that our players just want to access the game in the cloud, for online playing and/or in-game purchases, and they want to do that as fast as possible.

So let's see how they can do that while still accessing via the internet, but using AWS Global Accelerator. Basically, AWS Global Accelerator is one of our services available in most of our PoPs, and what it does is simplify traffic management, especially if you have a multi-Region deployment, and improve the performance of applications by accelerating the delivery of dynamic content. With AWS Global Accelerator, you take the traffic off the internet through the 110 points of presence where Global Accelerator is present, and you move it onto the AWS backbone.

This way, the traffic is then routed via the AWS global network to the regional endpoint, which basically reduces latency, jitter, and packet loss: a win for both the company and the end user.

AWS Global Accelerator chooses the optimal AWS Region based on the geography of the end customers, which is especially important if you have a multi-Region deployment, and that helps Unicorn Dot Inc deliver increased performance of up to 60%, which is quite a lot.

That's comparing with basic internet access. Then, since Global Accelerator operates at layer four of the OSI model, it can be used with any TCP or UDP application, which is quite a plus. And one thing that is probably quite important:

Global Accelerator can have multiple application endpoints in the cloud. Those can be EC2 instances, Network Load Balancers, Application Load Balancers, or Elastic IPs, but all of those are going to be fronted by the same anycast IP addresses from the moment you create the Global Accelerator.

Basically, you receive two static IPv4 anycast addresses; if you choose to have a dual-stack Global Accelerator, you will also get another two static IPv6 anycast addresses. Those IP addresses are then announced across all the Global Accelerator PoPs and accept the incoming traffic from the end user, via the internet, into the AWS global network at the location which is closest to the end users. Once on the AWS backbone, the traffic then travels via the optimal network path to the endpoint in the Region.
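
A minimal boto3 sketch of that flow, assuming a hypothetical ALB ARN as the endpoint; note that the Global Accelerator API itself is served out of us-west-2 even though the endpoints can be in any Region.

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Creating the accelerator returns the two static anycast IPv4 addresses.
acc = ga.create_accelerator(
    Name="unicorn-assembly",
    IpAddressType="IPV4",      # "DUAL_STACK" would add two IPv6 addresses
    Enabled=True,
)["Accelerator"]

# Layer-4 listener: any TCP or UDP application works.
listener = ga.create_listener(
    AcceleratorArn=acc["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)["Listener"]

# One endpoint group per Region; Global Accelerator picks the optimal one
# based on client geography. The ALB ARN below is a placeholder.
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[
        {
            "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/game/abc123",
            "Weight": 128,
        }
    ],
)

print(acc["IpSets"])   # the anycast addresses announced from the PoPs
```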

But how about if I'm more interested in improving my users' experience by reducing the time for downloading content, or just serving promotional content?

So let's have a look at how Amazon CloudFront comes into play. Amazon CloudFront, if you haven't used it, is our service for fast static and dynamic content delivery, leveraging the AWS global network.

Amazon CloudFront basically employs a global network of edge locations and regional edge caches that cache the content closer to your viewers. But besides static content, Amazon CloudFront also accelerates dynamic content and TCP flows.

So that is really good to know; not many people are using it for that, and it's really, really good.

Amazon CloudFront also has persistent connections to the origin. So if the content is not present in the PoPs, CloudFront doesn't have to establish a new TCP/TLS connection to the origin; it just uses the persistent one.

Additionally, Amazon CloudFront together with Amazon Route 53 can create some really cool architectures for failover when you have a multi-Region deployment. Basically, at a high level, how you use the service: you create a CloudFront distribution and then you configure the origin with a publicly accessible domain name.

The origin is where the content is stored, and CloudFront basically fetches the content from it to serve the viewers. If the content is not available in the PoPs, then we use the regional edge caches, which sit between the PoPs and the origin and have bigger storage.

So the content can be closer to the user, and you don't have to go back to the Region via the AWS backbone to serve your customers.

To summarize, Amazon CloudFront improves the experience of the user by using the AWS backbone for improved origin fetches and dynamic content acceleration.
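
To make the "create a distribution and point it at an origin" step concrete, a minimal boto3 sketch follows; the origin hostname is hypothetical, and the cache behavior is deliberately kept to the legacy minimal settings rather than a managed cache policy.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

resp = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),   # any unique string
        "Comment": "unicorn-assembly content",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "game-origin",
                    # hypothetical publicly accessible origin hostname
                    "DomainName": "origin.example.com",
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "https-only",
                    },
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "game-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # minimal legacy cache settings; a managed cache policy could be used instead
            "MinTTL": 0,
            "ForwardedValues": {
                "QueryString": True,
                "Cookies": {"Forward": "all"},
            },
        },
    }
)
print(resp["Distribution"]["DomainName"])   # the *.cloudfront.net name viewers hit
```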

All is good with AWS Global Accelerator and Amazon CloudFront. But how about if your game also provides an AR/VR experience, and your users need single-digit-millisecond access to your game?

So let's have a look at AWS Local Zones.

AWS Local Zones are an extension of an AWS Region in the geographic proximity of your users.

Currently, at this moment, AWS has 33 Local Zones all around the world, deployed in metropolitan areas. For the purpose of this scenario, because we have gamers in Mexico, we have chosen the Querétaro Local Zone; I hope I'm pronouncing it correctly.

That's the Local Zone in Mexico. Then, for the users in Australia, we have chosen the Perth Local Zone. Why? Because they are closer to those users that we think are going to access our AR/VR experience in the cloud.

So basically, the Local Zone is physically connected to a Region via the AWS backbone.

How does it work at a logical level? You enable the Local Zone in your account, then you just extend the VPC from the Region into your AWS Local Zone, and then you can start building in the Local Zone, closer to your customers.

This way, all your customers can benefit from single-digit-millisecond latency when accessing your deployment in the Local Zone.

To improve latency, Local Zones have their own connection to the internet with the ISPs in the metro areas, and they also support Direct Connect.

So resources in the Local Zone can be accessed with really low latency.

They also have a pool of IP addresses which is totally different from the one in the Region.

To sum up, AWS Local Zones basically deliver low latency by placing services such as compute, storage, and other dedicated services closer to the end user, while being connected via the AWS backbone to the Region, having access to all the other AWS services.
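
At the API level, the "enable the Local Zone and extend the VPC into it" flow could look roughly like this with boto3; the zone group and zone names are what the Querétaro Local Zone is assumed to be called here, so check describe_availability_zones for the exact values.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Discover Local Zone names and their opt-in groups (the names used below are assumptions).
zones = ec2.describe_availability_zones(
    AllZones=True,
    Filters=[{"Name": "zone-type", "Values": ["local-zone"]}],
)

# Opt in to the zone group once per account.
ec2.modify_availability_zone_group(
    GroupName="us-east-1-qro-1",       # assumed group name for Querétaro
    OptInStatus="opted-in",
)

# "Extending the VPC" simply means creating a subnet whose AZ is the Local Zone;
# the subnet carves out part of the VPC's CIDR like any other subnet.
ec2.create_subnet(
    VpcId="vpc-aaaa1111",
    CidrBlock="10.0.128.0/24",
    AvailabilityZone="us-east-1-qro-1a",   # assumed Local Zone name
)
```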

We have talked about how we can improve the user experience. Let's talk a little bit more about how we can improve you as a company, how we can improve your infrastructure by using the AWS backbone.

We're going to discuss a couple of scenarios, and we're going to start with the most common one, I would say: how are you building hybrid architectures, and what is the role of the AWS backbone in all this?

Again, we have our Unicorn Dot Inc. So let's focus now on connecting the on-premises sites to the deployments that you have in the Region, for the first scenario, because we're going to discuss a couple of scenarios.

Let's assume that you are a company, Unicorn Dot Inc, which requires a very high-throughput connection to the AWS Regions, to the resources that you have deployed there.

Well, the good news is the fact that you can interconnect directly with the AWS global network via peering, without using the public internet, using Amazon peering.

You can do that in one of our 98 public internet exchange points, but we also provide private peering in other selected locations. If you want to find out more about how our interconnect works and how you can interconnect with us, just check the amazon.com profile on PeeringDB.com; you will find all the details there. Basically, just at a high level, this is how Amazon peering works.

In the internet exchange PoPs we have all the networks, well, all the networks which are interested in peering there, and we have our own routers which have access to the global network. Basically, if we want to interconnect, we are all connected to the same fabric and, via BGP, we are exchanging routes between us.

What is important to know is that, with Amazon peering, Amazon selectively chooses which routes to deliver at these locations, depending on the proximity to the Region, the service endpoint, and different other factors.

So it's important to know that, even though it's a great way to bypass the internet and go straight to your resources in the Region using the AWS global network, Amazon peering is not a service offering. If you want a service offering, if you want a connection which is reliable, which can provide you an SLA if needed,

then let's have a look at AWS Direct Connect. Basically, AWS Direct Connect offers you the shortest path from on premises to the resources in the cloud, through a dedicated network connection from your site via a Direct Connect location and via the AWS global backbone.

The AWS Direct Connect locations, if you are using AWS Direct Connect today you probably know, are third-party colocation facilities where, via a cross connect between the AWS device and the partner or customer device,

the customer gets access to the AWS backbone, so they can route to the Region, wherever they have the resources deployed.

As a customer, basically, you can choose to have your own dedicated connection, and that can get you dedicated links of up to 100 gigabits per second, or you can have a connection from one of the partners, which we call a hosted connection, and that can give you a connection of up to 10 gigs with AWS Direct Connect.

We also have the concept of the Direct Connect gateway, which is a globally available resource, and you can use the AWS Direct Connect gateway

to connect your on premises, over a Direct Connect connection, to the resources in the cloud,

using a private VIF or a transit VIF. You can also have a public VIF on the Direct Connect connection, which gives you access to the public endpoints.

Please bear in mind, the Direct Connect gateway doesn't work with public VIFs, only with private and transit VIFs, just to be clear on that.
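
Sketching that with boto3 (connection and gateway IDs are placeholders): you create the Direct Connect gateway once, then associate it with the virtual private gateway of each VPC, or with a Transit Gateway.

```python
import boto3

dx = boto3.client("directconnect")

# Globally available resource; the Amazon-side ASN here is an arbitrary private ASN.
dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName="unicorn-dxgw",
    amazonSideAsn=64512,
)["directConnectGateway"]

# Associate the DX gateway with a gateway in the target Region:
# a virtual private gateway (private VIF path) or a Transit Gateway (transit VIF path).
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dxgw["directConnectGatewayId"],
    gatewayId="vgw-aaaa1111",   # placeholder virtual private gateway ID
)
```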

So now we have talked about how you can get onto the AWS global network and reach your resources in the cloud from on premises.

Then how about if you have a multi-Region deployment, which I'm sure most of you probably have today, but you have a low number of VPCs?

Well, for that case, you would probably have a private VIF on your Direct Connect connection which goes to the Direct Connect gateway, and then you have the virtual private gateways of your VPCs associated with the Direct Connect gateway.

And the question is: OK, how am I connecting the VPCs between them, in different Regions? For that, basically, you have cross-Region VPC peering. When you establish that peering connection between your VPCs from different Regions, you ensure that the traffic remains in the private IP space and always stays on the AWS backbone.

So using cross-Region VPC peering, the traffic always stays on the global AWS backbone,

so it is secured and we manage it. What is important to know, and Ernesto also mentioned it, is the fact that cross-Region VPC peering is not transitive. So, for example, in this case, if you don't have a connection to the Mumbai Region via Direct Connect and you want to go from New York to the VPCs in the Mumbai Region, you won't be able to do that natively. Of course, there are workarounds, we all know that, but just keep in mind that cross-Region VPC peering is not transitive.

A better way to do it, which is also suitable for deployments with more VPCs, is basically, if you're doing that, to have a Transit Gateway, because you need to connect all your VPCs together. So with the Transit Gateway, you would probably have on your Direct Connect a transit VIF that is attached to the Direct Connect gateway, and then the Transit Gateway is associated with that Direct Connect gateway. And in order to have your VPCs communicate with each other, you can have inter-Region Transit Gateway peering. As with cross-Region VPC peering,

also with inter-Region Transit Gateway peering the traffic stays on the global AWS backbone. So from one Region to the other, it stays on the AWS global backbone. From the management point of view, with inter-Region peering the route entries are not propagated dynamically; they will have to be inserted manually as static routes. Just a piece of advice: when you are configuring your Transit Gateways in different Regions, make sure they have different ASNs. In the future, maybe we're going to discuss having the routes dynamically propagated, but once you configure a Transit Gateway and you choose an ASN, you can't update it afterwards. So just make sure you have different ASNs.
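
A boto3 sketch of the inter-Region peering and the static routes it needs (account, TGW, and route table IDs are placeholders); there is no route propagation across the peering, which is why routes are added explicitly.

```python
import boto3

use1 = boto3.client("ec2", region_name="us-east-1")
aps1 = boto3.client("ec2", region_name="ap-south-1")

# Request the peering from Virginia towards Mumbai.
peering = use1.create_transit_gateway_peering_attachment(
    TransitGatewayId="tgw-virginia1111",
    PeerTransitGatewayId="tgw-mumbai2222",
    PeerAccountId="123456789012",
    PeerRegion="ap-south-1",
)["TransitGatewayPeeringAttachment"]

# Accept it on the Mumbai side.
aps1.accept_transit_gateway_peering_attachment(
    TransitGatewayAttachmentId=peering["TransitGatewayAttachmentId"]
)

# Routes over a peering are static only: point the Mumbai CIDRs at the
# peering attachment in the Virginia TGW route table (and vice versa).
use1.create_transit_gateway_route(
    DestinationCidrBlock="10.64.0.0/16",
    TransitGatewayRouteTableId="tgw-rtb-virginia1111",
    TransitGatewayAttachmentId=peering["TransitGatewayAttachmentId"],
)
```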

We've talked about the hybrid architectures. Now, a very cool use case for the AWS global backbone is how you connect on-premises sites while totally bypassing the AWS Region. For that, we have our Unicorn Dot Inc and we want to connect New York with Mumbai and create reliable transit network connectivity. For that, we're going to use AWS Direct Connect SiteLink, which basically connects on-premises sites, bypassing the Region. That means you don't have to have anything in the Region, nothing at all. All you have to have is Direct Connect, and you will have to create VIFs on the Direct Connect. And I say VIFs because you can have private VIFs or transit VIFs, and when you create them, you just have a tick box and you say, OK, I want to enable SiteLink on that particular VIF. Very easy and very nice.

We can even add a third site to make it more interesting. Actually, we can add up to 30 interfaces and attach them to the Direct Connect gateway. That means you can have up to 30 different on-premises sites, all of them using Direct Connect, and you can use SiteLink for that. Basically, once all those VIFs are attached to the same Direct Connect gateway (that is important: the same one), you start sending data between them. Your data follows the shortest path between the AWS Direct Connect locations to its destination, using the AWS global backbone.

You can have private VIFs and transit VIFs; again, public VIFs don't work with the Direct Connect gateway. And again, connectivity remains in the AWS global network. With SiteLink, the Direct Connect gateway basically learns BGP IPv4 and IPv6 prefixes (both are supported) from your routers over the SiteLink-enabled VIFs, runs BGP best-path algorithms, updates attributes like next hop or AS-path prepending, and then advertises these BGP prefixes to the rest of your SiteLink-enabled VIFs. After receiving this advertisement from the Direct Connect gateway, your routers update their routes and forward the traffic between the SiteLink-enabled locations using the shortest path between AWS Direct Connect locations over the AWS backbone.
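
The "tick box" translates to a single flag on the virtual interface. A hedged boto3 sketch, assuming the enableSiteLink field and placeholder connection, VLAN, ASN, and gateway values:

```python
import boto3

dx = boto3.client("directconnect")

# A private VIF on an existing Direct Connect connection, attached to the
# DX gateway, with SiteLink switched on so sites can reach each other
# directly over the AWS backbone without touching a Region.
dx.create_private_virtual_interface(
    connectionId="dxcon-aaaa1111",
    newPrivateVirtualInterface={
        "virtualInterfaceName": "newyork-sitelink-vif",
        "vlan": 101,
        "asn": 65001,                                 # customer-side BGP ASN
        "directConnectGatewayId": "dxgw-id-placeholder",
        "enableSiteLink": True,                       # the SiteLink tick box
    },
)
```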

What is important to know is the fact that you can use that functionality and turn the AWS global backbone into your primary backbone, or just a backup to your current, existing backbone. And another use case which is quite common with SiteLink is that, if you are expanding, it's just easy to add new locations as the business grows.

And that takes us to our last scenario: building your SD-WAN hub on AWS. Again, we have Unicorn Dot Inc, and Unicorn Dot Inc is running SD-WAN on premises. I don't know how many of you are doing that, but we do have customers, and what they are asking us is: OK, how can I integrate whatever SD-WAN I'm running on premises with the AWS cloud, because now I have deployments in the AWS cloud?

So what we have seen our customers doing is basically deploying their SD-WAN appliances in AWS, in a VPC of their own; let's call it the SD-WAN appliances VPC. And then they connect their on premises via a Direct Connect private VIF, the AWS global backbone, using the Direct Connect gateway, and then the virtual private gateway to that VPC.

We also have a Transit Gateway: we go for a scenario where we do have multiple VPCs in the cloud, and that Transit Gateway already has all the workload VPCs attached to it. So we attach this SD-WAN appliances VPC to the Transit Gateway. We extend the overlay from on premises to the AWS SD-WAN appliances VPC, let's call it like that. But we then need to integrate the SD-WAN appliances VPC with the Transit Gateway, because that is where we have all our workload VPCs attached.

For that, we are using Transit Gateway Connect, which is our native way of integrating SD-WAN appliances into the AWS Transit Gateway, by running a GRE tunnel on top of the VPC attachment to the Transit Gateway. Doing this, you basically have end-to-end management of your global network using the current orchestration platform of your SD-WAN partner for the on premises, which is a major plus. Direct Connect is one way to use it as a transport mechanism for your SD-WAN, but again, you can also use the internet to get to your SD-WAN appliances VPC in the cloud when you have multiple sites.
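
Roughly, in boto3 terms (IDs, addresses, and ASN are placeholders): the Connect attachment rides on top of the SD-WAN appliances VPC attachment, and each appliance gets a GRE plus BGP Connect peer.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Transit Gateway Connect rides on the existing SD-WAN appliances VPC attachment.
connect = ec2.create_transit_gateway_connect(
    TransportTransitGatewayAttachmentId="tgw-attach-sdwanvpc111",
    Options={"Protocol": "gre"},
)["TransitGatewayConnect"]

# One Connect peer per appliance: GRE outer address plus BGP over a /29
# link-local inside CIDR.
ec2.create_transit_gateway_connect_peer(
    TransitGatewayAttachmentId=connect["TransitGatewayAttachmentId"],
    PeerAddress="10.0.1.10",                 # appliance's IP in the VPC
    InsideCidrBlocks=["169.254.100.0/29"],   # GRE/BGP inside addressing
    BgpOptions={"PeerAsn": 65010},           # appliance's BGP ASN
)
```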

I just want to call out that with SiteLink, you can actually extend your SD-WAN from one site to another, just by enabling SiteLink on your private virtual interfaces or transit virtual interfaces. Then the question comes: how about if I have multiple Regions, multiple sites? How do I manage my SD-WAN, how do I manage this integration, and, more importantly, how do I keep the same traffic segmentation that I have on premises in the cloud, regardless of the destination or the source of the traffic?

For the scope of this scenario, let's say we have two routing domains on premises: we have production and dev/test, and I want to keep those in all the Regions that I'm using. OK, we have two Regions here, but maybe you use three Regions, four Regions, 10 Regions; we want to keep the same traffic segmentation with SD-WAN.

Basically, you can create different Transit Gateway Connects for each type of segment that you have on premises and extend them to the Transit Gateway in the Region; in the Transit Gateway, as you know, we can have multiple routing domains. But then comes the question: OK, with Transit Gateway peering, can I extend my different domains across the Regions? Well, not quite, and the reason for that is that the Transit Gateway inter-Region peering is shared by all the traffic; we don't keep different segmentation across it.

So then how can you have that global network across on premises and across different Regions and maintain the traffic segmentation? Which is important, because, let's be serious, we all care about what is test and what is production. For that, let's look at AWS Cloud WAN. Basically, AWS Cloud WAN is our service which makes it easier to manage and monitor a unified network, a unified network which spans on premises and also multi-Region deployments. To do that,

we're going to start our migration from our multi-Region Transit Gateway-based architecture directly to a big-bang Cloud WAN architecture, just in the interest of time. So we're going to start with building the global network, which just contains all the network elements in your global network. Then you build the core network, which is part of your global network but contains only the elements that are managed by AWS. And the third, most important thing is basically the core network policy, which makes it really easy to manage your network with AWS Cloud WAN.

The core network policy uses JSON format, and you declare there how you want your segments to be, what attachments to have, how you want routing to work, really. And we will see how this translates to our architecture.

So in the core network policy, we define the Regions that we want to have. In our case, we have Virginia and Mumbai, and at that moment, Cloud WAN builds those core network edge routers; they are quite similar to Transit Gateways. We are going to build one in each of those Regions, and then the attachments will basically be attached to the core network edge routers.

After that, we start to build our segments. Again, we had our production and our dev, and let's have a library segment as well. The segments are dedicated routing domains, or VRFs for a better term if you want to call them that, and we can define those segments in the core network policy as well; it's all in the policy, really easy. In the interest of time, again, I will go straight to a big-bang migration directly to Cloud WAN.

So our VPCs, which initially were attached to the Transit Gateway, we are now going to attach to the Cloud WAN segments. Cloud WAN supports native VPC attachments, and basically the mapping of VPCs to segments is specified in the core network policy. For example, in this core network policy, all attachments with the tag prod will be mapped to the segment prod. Besides VPC attachments, Cloud WAN also natively supports Connect attachments, and that allows you to connect SD-WAN appliances to your core network edge, to your Cloud WAN.
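
As an illustration of what that looks like, here is a trimmed-down core network policy with two edge locations, two segments, and the tag-based attachment rule described above, plus the boto3 calls that would create the core network and a tagged VPC attachment. Treat it as a sketch: the ARNs are placeholders and the policy is reduced to the fields relevant here.

```python
import json
import boto3

# Network Manager / Cloud WAN is a global service; its API lives in us-west-2.
nm = boto3.client("networkmanager", region_name="us-west-2")

policy = {
    "version": "2021.12",
    "core-network-configuration": {
        "asn-ranges": ["64512-64555"],
        "edge-locations": [{"location": "us-east-1"}, {"location": "ap-south-1"}],
    },
    "segments": [
        {"name": "prod", "require-attachment-acceptance": False},
        {"name": "dev", "require-attachment-acceptance": False},
    ],
    "attachment-policies": [
        {
            "rule-number": 100,
            "condition-logic": "or",
            "conditions": [{"type": "tag-exists", "key": "segment"}],
            # map each attachment to the segment named in its "segment" tag
            "action": {"association-method": "tag", "tag-value-of-key": "segment"},
        }
    ],
}

gn = nm.create_global_network(Description="unicorn-global")["GlobalNetwork"]
core = nm.create_core_network(
    GlobalNetworkId=gn["GlobalNetworkId"],
    PolicyDocument=json.dumps(policy),
)["CoreNetwork"]

# A VPC attachment tagged segment=prod lands in the prod segment.
nm.create_vpc_attachment(
    CoreNetworkId=core["CoreNetworkId"],
    VpcArn="arn:aws:ec2:us-east-1:123456789012:vpc/vpc-aaaa1111",
    SubnetArns=["arn:aws:ec2:us-east-1:123456789012:subnet/subnet-aaaa1111"],
    Tags=[{"Key": "segment", "Value": "prod"}],
)
```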

And what is really cool is that a couple of weeks ago we announced Tunnel-less Connect: so besides GRE, tunnel-less Connect attachments are now supported. What is cool about that is that it is aimed at high performance. With GRE or IPsec protocols, you know, there is some packet overhead, which with the tunnel-less option you don't have. So for that reason, if you're using that, you can get up to 100 gigabits per second per attachment, which is quite interesting if you do require that throughput.

So we have attached the VPCs, we have attached our SD-WAN appliances VPC. Probably you also have some VPNs; you can attach them too, as Cloud WAN supports native VPN attachments. And in case you need to attach Direct Connect to your Cloud WAN: as of today, we don't support native attachments, so you will have to have your Direct Connect attached to a Transit Gateway and then the Transit Gateway peered with the core network in that particular Region.

And that takes us to a discussion. When you're going to build your architecture, think about: OK, do I really need a big-bang approach, moving all your VPCs to Cloud WAN, or do I need a federation approach where I can federate the Transit Gateway with Cloud WAN and have them coexist together? Because again, if you need Direct Connect, you will need to have a Transit Gateway as well. We do have some really cool blog posts about this, so if you're thinking about using Cloud WAN, just think about which one is the best strategy for yourself.
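
For the federation approach, the Cloud WAN side exposes this roughly as a peering plus a route-table attachment. A hedged boto3 sketch with placeholder ARNs follows; the exact parameters and response shapes may differ by API version, so check the Network Manager documentation before relying on it.

```python
import boto3

nm = boto3.client("networkmanager", region_name="us-west-2")

# Peer an existing Transit Gateway (which holds the Direct Connect
# association) with the Cloud WAN core network in the same Region.
peering = nm.create_transit_gateway_peering(
    CoreNetworkId="core-network-0placeholder",
    TransitGatewayArn="arn:aws:ec2:us-east-1:123456789012:transit-gateway/tgw-aaaa1111",
)["TransitGatewayPeering"]

# Then attach one of the TGW route tables to the core network so routes
# can be exchanged between the two.
nm.create_transit_gateway_route_table_attachment(
    PeeringId=peering["Peering"]["PeeringId"],
    TransitGatewayRouteTableArn=(
        "arn:aws:ec2:us-east-1:123456789012:transit-gateway-route-table/tgw-rtb-aaaa1111"
    ),
)
```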

Nick: So it's really great to be here. I'm really excited. My name is Nick Mullenix. I'm with Fortive, and this is our story about how we migrated our physical data center interconnectivity into AWS.

This might be relevant to you. Perhaps you have a physical data center connectivity presence today, perhaps you have a large operational footprint - might be operating out of multiple regions, maybe even multiple continents.

We have that same scenario. If you take a look at our diagram here, this beautiful diagram that I've architected, this is kind of what our scenario was a couple of years ago.

We have a little bit of background about Fortive. We are a technological conglomerate. We have manufacturing, we have lab presence, the whole nine yards, right? So a lot of our spoke connectivity for our spoke sites, there might be a lot of different use cases going on there.

But effectively, we provide centralized infrastructure services and we've got to allow all of our spoke sites no matter where they are to be able to communicate to the central Fortive HQ so to speak.

We had a lot of legacy technologies in these physical data centers. You've got to pay somebody to lay hands on your hardware, you have to pay for these physical circuits. It's difficult to manage and it doesn't scale very well.

So in an operational sense, we had a lot of issues with asymmetric routing - that's probably the best example. But it's very difficult to make changes at scale to a network like this, right?

In looking at this and partnering with AWS (Fortive has a great partnership with AWS and has for some time), we're leveraging a lot of the other pieces of the AWS fabric. So it made sense for us to start to look into how can we move some of the network functionality into AWS.

At first we had this construct where we were trying to take that same regional connectivity that we had, where Fortive had three different physical data centers that were on prem in Amsterdam, Seattle and Ashburn.

Taking that same footprint and trying to push it up into AWS, we came up with this regional flow. This is very heavily abstracted of course. But the important point here is that all of the traffic is gonna be flowing up towards a regional core of some kind.

That regional core is going to handle all the peering, the routing relationships and things of that nature. Whether I'm a DMVPN spoke, maybe I'm on some kind of Direct Connect, I'm an SD-WAN spoke, what have you - eventually when my connectivity, when my routes enter into the region, they're gonna home run to our transit gateway via a series of transit gateway attachments.

Eventually we're gonna end up at that regional backbone, right? So this worked really, really well until it came time to go multi-region. That's where we found ourselves - how do we make this scale a little bit better and make this so that we can operate out of multiple regions and exchange routes?

The transit gateway peering doesn't really scale that well for us. The big missing piece that was really important for Fortive was AWS Cloud WAN. I can actually still remember reading about AWS Cloud WAN - I wasn't at AWS re:Invent in 2021 but I believe it was announced at AWS re:Invent in 2021.

I know it was late June 2022 when it hit the market and we were ready at Fortive to implement that very quickly. What that looked like for us is this:

We have our three regions. We created a Cloud WAN core network edge attachment in each of those three regions. The magic here, the secret sauce, is that now my backbone routing firewall devices are able to talk to each other directly.

I can create my BGP peering relationships and now I can exchange my routes dynamically. So this is a fully dynamic solution - if I add another spoke site, if I subtract a spoke site, if we add a new Direct Connect etc. - at a high level regionally it's dynamically allocating. It's pretty much automatic.

This is a really high level overview, but what this did for us is effectively reduce our MTTR - our mean time to resolution - for our entire network stack by 35%. That's how much ticketing, incident ticketing, request ticketing and things of that nature we were actually able to take completely out of our environment by going to this system here.

Another thing I'd like to point out is that this is an extremely performant solution. AWS Cloud WAN - there's a couple of different use cases that I can point out to you that I was really amazed by the performance.

The first one I'll tell you about is we have three regions here, but we all have contingent workforces in a lot of different areas. For ours, we have a very large contingent workforce in India. We have a very big manufacturing footprint in Australia, right?

So with this solution here, I can take and create a new region, let's say like AP South or Mumbai or something like that. And I can deploy the same type of topology within hours. I can create the peering relationships.

I've done this in practice, I can tell you a colleague of mine, Eric Brooks, if you go ask him he'd say the same thing. We can both remember running this test where we spun up this infrastructure and we expected to maybe see some performance impact.

We're looking at maybe an MPLS connection from Australia directly to another site. We're thinking, well, maybe we're gonna take on some additional latency - and we saw this actually outperformed our MPLS. It felt like magic, because you'd expect that with the additional transit path and all this additional technology involved, it's gonna be a little slower.

But let's see if we can make this work - no, actually worked better and we had to go back and check our work. Are we missing something here, right? So that was really, really exciting.

In terms of scale for me, this makes me sleep a lot better at night because now I can extend my regional availability for all of my core network services as close to our user base as we possibly can.

We started with three. I think we'll probably have five within the next year. I know AWS is adding things all the time, probably added something while we're up here talking, right? So I assume there will be more regions all the time for AWS Cloud WAN as well.

It's fantastic, it's really very helpful for someone in my position to make things a little bit easier. More performant for sure. And the best thing for me is operationally - how much did it cost? Well, it actually saved us money.

We eliminated nearly two-thirds of our spending, so operational spending went down by 66%. It really isn't very often that somebody in my position gets to put in a new technology that worked better, that was easier to support, that was more flexible, more scalable, more adaptable, and that was cheaper for my company. That was fantastic.

You know, this is, in my opinion, really the future. It's very, very flexible and I really look forward to seeing more companies go through the same transformational journey. So thank you very much.

Ernesto: 100% a lot of claps for Nick. I love his energy and we had great fun working with Nick and Fortive building and modernizing their global network. So who better than Nick to come and tell the story, right?

And similar to Fortive, we have multiple other customers going through the same transformation, different levels of maturity and different sets of combinations of services. So we tried to cover some of the most popular ones - that's what Corina covered today through these use cases.

I wanted to show you also how we do things in the background - the bits that you don't get to see. Of course this was a Level 300 session, but I will invite you to go and check some of the other 400-level sessions where we really go deep into how we have built our network infrastructure, from the data center up to the backbone.

As Corina and Nick mentioned, we're constantly releasing new features and services to improve our offering and allow you to build global networks on AWS.

We have recently launched Tunnel-less Connect which removes the need for GRE tunnels to extend your SD-WAN connectivity into Cloud WAN, facilitating the creation of global networks if you have a preferred SD-WAN provider.

We also have more case studies from customers talking about the performance improvements when migrating to SiteLink, for example Motional talking about the performance and cost savings they measured.

As Nick mentioned, this is very true - networking is usually a cost center. So when we talk about networking and how we can implement new solutions, how to prioritize that within your company, we are looking at how can we reduce cost when talking to our bosses, right?

I'll leave you with those QR codes - you can download the presentation and use the QR codes to read more. But I also invite you - we have network specialists presenting sessions across the week with hands-on workshops that are super cool.

I invite you to check them out if you want to get a taste of these new networking services that are pushing the boundaries of traditional networking like VPC Lattice or Cloud WAN. They are showing the complete consumption model as a network engineer - how would you interact with your network?

I'd really recommend getting hands on and getting dirty in these workshops. As usual, thanks a million for spending this hour with us.

In AWS we value feedback - 90% of our roadmap comes from customer feedback. So the time I spend talking to customers, I spend talking to the service teams adding your feedback. In the same way, we do it when creating content and planning sessions for the next event.

It'll take you a couple minutes, but if you can leave feedback it will help us understand what content you want to see and we will build it for the next event. Thank you!
