Bringing AWS to remote edge locations

All right, thank you very much for coming to the session. I know you have a lot of choices and it's towards the end of the day, so thank you once again for showing up at our talk here.

My name is Siddhartha Roy. I'm the General Manager for the AWS Hybrid Edge and Data Transfer family of services. That includes the Snow data migration and edge compute services, Storage Gateway, the DataSync service, and also the Transfer family of services as well.

And along with me is my colleague, Ramesh Kumar. He's the Head of Product and Solutions for Snow, and he's going to come up later in the session.

So today we are going to cover bringing AWS to the edge, and specifically to the remote edge, right? You have seen a lot of great talks on what we are doing in the cloud. Many of our customers are asking us to extend the cloud to the edge, and we'll talk primarily about what services we are deploying at the edge and how our customers are building applications, going through some of the use cases.

So a quick rundown of the agenda for today: we're going to first cover the continuum of AWS services, what we call the close edge and the far edge. Then we'll double click into some of the services like Local Zones and Wavelength. And then Ramesh is going to come up and talk to the Snow family of services and the capabilities at the rugged edge.

Yeah. So with that, let me first talk about learning from customers. At AWS, everything starts with working backwards from the customer. And when we talked to customers, they were telling us three things.

The first thing is they want to extend their infrastructure to the edge. That means they want to extend the infrastructure services, the APIs, the tooling experiences to the edge, so they can extend their cloud applications to the edge and deploy their apps.

The second thing is the use cases driving these edge applications. Number one is low latency: because some of these edge applications are deployed at the edge, they're collecting a lot of data, and there's a need to react in real time and process the data.

The second is there's a need to migrate and modernize from your traditional on-premises infrastructure.

Third, many of these applications are processing large amounts of data, think about structured and unstructured data. It's difficult to move the data back and forth from the edge to the cloud, and they need to process it right there with local data processing. And finally, with data residency, many of these customers want to deploy the services and applications in locations that need to be in a certain country or state or a geo-fenced boundary.

Now, finally, what are some of the challenges with the solutions today? Often we have heard customers telling us that current solutions are complex and difficult to procure, manage, and operate. Why? Because they are from different vendors, they don't have the same APIs, and the infrastructure management and tooling is all different, right?

So the value proposition that we provide is coming together as a single solution: a single set of infrastructure, a single management plane, a single set of APIs that extend from the cloud, easy to procure, easy to manage and deploy.

So let's cover a little bit on some of the services as we know them. On your left-hand side, you have the global infrastructure of regions. These are the regions where you deploy your applications, the regions you know and love.

Now, the first click stop out is the metro areas and telco networks. Many of our customers want to deploy their apps in big metros and industry centers, and consequently, what they want is to have the compute, network, and storage be close to the applications in those industrial cities. I'll give some examples of that.

So some of the services there include our CloudFront service, Local Zones, Wavelength, and the Telco Network Builder.

One more click stop out is our on-premises services. Think about Outposts, think about Storage Gateway, which is one of the services I manage as well. If you think about on premises, these are services that need to stay in a customer's colocated environment. Why? Because they need the command and control, primarily for latency and, in large part, data residency reasons. Then further out from on premises is what we call the remote, or rugged mobile disconnected, edge.

That's where the Snow family of services comes in. The Snow family of services can operate completely disconnected from the region; the control plane and data plane are disconnected from the region. Along with Snow, we have a set of Integrated Private Wireless on AWS services that offers a customer a telco private 5G solution, if you will. It could be built on Snow with our third-party vendors, and that also operates at the remote, rugged, mobile edge.

And finally, we have the smart devices, which is our IoT family of services. Think of them as your IoT endpoints, thousands, you know, millions of endpoints, which are sensor-controlled, if you will, which are collecting data and feeding it to the rest of the services here in the tier.

A little bit about how we think about applications. Our mission is to make sure that no matter where a customer starts in the journey, we make it easy for customers, including developers, to develop, test, deploy, and operate their applications. And there are two kinds of developers here.

One is what we call cloud initiated. These cloud-initiated developers, services, and applications begin in the cloud. So the first design point is that the application was designed in the cloud using AWS services, EC2 or containers or S3, for example, right? And then what's happened is, over time, they figured out: I need to deploy this application beyond the cloud, close to the edge, because a lot of my data is getting created there.

So think about maybe a private 5G application, initially deployed in the cloud for a certain reason, then also deployed to the edge where the sensors are collecting data with the software-defined radios.

The second category is the cloud enabled. These are edge applications that start not in the cloud but at the edge, and they may or may not have AWS API integration, but they eventually want to get to the cloud and take the benefit of the cloud in terms of monitoring capabilities, observability, and other things.

Now, double clicking a little bit further, I said I would talk a little bit about each of the services. First, Local Zones: think about Local Zones as us extending the cloud into all these metro and industry centers. We have 33 of these launched, and we have 17 more coming up. And if you think about these areas, this is where we have deployed.

Think about it as an extension of the region. Here you have your favorite services deployed, except the difference from a region is that these services are deployed in your city of choice. And it really speeds things up, to single-digit millisecond latency, as we call it.

So what happens is your developers see fast, low-latency access, if you will, for their applications, and customers see much faster access.

Now, what are the common use case categories for Local Zones? One is latency based. These are applications that are highly sensitive to latency, application types like gaming, for example. Think of one of the Local Zones customers like Supercell.

If you think about gaming needs, they need their applications to be deployed in certain centers where a lot of gamers are residing, because they want low latency access in those regions. Another example is social media, where you might take your social media app and, depending on where your users reside, host certain parts of the app on a Local Zone in certain metros, right? Close to your users.

And then the other example is location based. Location based could be multiple types: where you need to keep data in a certain location for data residency, financial services is a great example of that, or public sector, because of government regulations. Or also, in the case of certain enterprise migration and modernization efforts, even though you want to migrate and modernize your workloads, your workloads can't leave your premises. So they need to be local.

So these are two categories of examples for Local Zones.

Ok. So moving on from Local Zones to Outposts. If you recall, Outposts is the next click stop in the continuum. These are deployed on premises. This could be a customer's data center or a colocation facility. And Outposts comes in two flavors.

One is the big rack. This rack, basically, as you see, is 42U, 80 inches tall. It's built on the same infrastructure as AWS and has the same set of APIs and tooling. And then, if your needs are much smaller, we have the Outposts server in two flavors: the 2U server and the 1U server, depending on your needs.

If there are rack space optimization needs, you can bring in the Outposts server instead of a full rack. Now I'd like to talk a little bit about the Nitro Security Key. If you're not familiar with Nitro, we love the Nitro system. It's present in every server, and what the Nitro system effectively does is enhance the security model of the Outposts rack or server. And besides security, it also provides high performance.

Now, one of the interesting challenges when you are deploying these devices on premises is that you want your data drives to be secure. And if you've managed a data center or a colo facility, you know that's very hard to do.

If you have to manage drives, you ought to have a certain way of inventorying those drives, maintaining their recycling procedures, their decommissioning procedures. It's very difficult to do. If a hard drive goes bad, you've got to figure out how to take that hard drive, shred it, degauss it, and you've got to do it not for one but for thousands and thousands of drives in the data center.

So what Nitro does, effectively, is externalize all the keys for that data onto this Nitro device, kept in a chip inside the Nitro device. So the keys are not on the data drives; they're on this Nitro device. And if there's a need to invalidate all the drives, because an Outposts server or a rack is being decommissioned, you simply crush the key by turning the knob. And that's how you render these drives useless, effectively.

So you have a single place of command and control on the back of every server where you can control the drives within the server without having to inventory every drive. So that's a pretty cool feature.

Now, I'll end my section here by recapping some of the use case categories I talked about: data residency, low latency, the disconnected workloads that Ramesh is going to talk about, and then also a little bit of private 5G. Ramesh has one example there on private 5G.

So I'm going to round out my section with a couple of examples on data residency and low latency. The first example is Philips. Philips Healthcare is in the business of providing critical care, and critical care lacked the predictive analytics to really facilitate what they call preventative care, right?

So here the business problem was: how do you provide the best preventative care to your customers, so you get ahead of the problem? And that meant bringing AI/ML on prem. Philips had a disparate group of systems on premises, a lot of different storage and compute systems, which all had different APIs and infrastructures.

So Philips worked with AWS and centralized this all on Outposts. What that effectively did was, now between the cloud and on prem, you had the same set of APIs moving back and forth, the same infrastructure, the same management tooling, the same control plane, right?

So that made management easier. And because there was a single source of procurement, capex was also easier to manage. And finally, the on prem and the cloud worked together very well: the compute and storage were on the Outpost, but the AI/ML was in the cloud. So that worked perfectly well together, and they drove to the business outcome of delivering better care for their customers.

Another example, this time in financial services, is FAB, or First Abu Dhabi Bank. The bank was in a phase of digital transformation where they needed to transform the user experience for their banking customers while adhering to their data residency obligations.

So what they did in this case was use Outposts to modernize their mobile banking and payment systems. They deployed all these modern workloads in containers on Outposts.

And because the container workloads need high uptime, they deployed these containers with a BCDR, business continuity and disaster recovery, model across two sites, with Outposts deployed at both: one in Abu Dhabi and one in Dubai, right?

So again, with the deployment of Outposts, the customer was able to save costs, have a single developer experience across the cloud and on prem, and also deliver a better, accelerated, and transformational customer experience for mobile banking.

Ok. So with that, I'm going to come to the rugged mobile edge section and hand off to Ramesh, who's going to walk us through Snow.

Thanks, Ramesh.

Thank you, Sid. So let's start with: what is the rugged edge? The rugged edge, simply, is edge locations where our customers are running operations in unconditioned environments, usually outside a data center, where you've got a wide temperature range for your operations, and, based on that environment, there could be vibration, dust, humidity, and those kinds of things.

It is also edge environments and locations where you do not have consistent network connectivity, or where you may be completely disconnected in air-gapped environments. Or it could be places where you need your compute and storage to be portable, to be mobile: maybe in the back of a truck, on a ship or another moving vehicle, or just in the backpack of an operator.

In these use cases, the Snow Family is a good fit for our customers, where they can run their operations and store and process data that's generated at those edge locations. Here are some of the rugged use cases where our customers are using Snow devices today to run their operations.

Think about a ship where data is being collected from sensors and needs to be processed on board, either for scientific research or oil exploration. It could also be a cruise ship or a navy ship where you're using the Snow devices to run the operations and workloads for multiple use cases on that ship. Think about factories, Industry 4.0 auto factories, where you're putting edge compute on the factory floor for predictive analytics or to manage smart robots. Or autonomous vehicles and autonomous test vehicles: these generate tens of terabytes of data per test drive, collected from a lot of different sensors, could be radar, could be lidar, could be cameras. The data needs to be stored, and a little bit of processing needs to be done at the edge, whether filtering, cleaning up the data, or annotation. And the data needs to be moved quickly for that customer to get it to their data lake in AWS, so that they can update their simulations and train their ML models.

So the AWS Snow Family is a family of services with secure and rugged devices that our customers use to run their operations at those rugged edge locations we just talked about. Over the next few slides, I'll deep dive into the services within the Snow Family, talk about the edge compute capabilities of Snow, give some use cases where customers are using Snow today and how they're using those capabilities, and then wrap up with some real customer examples where they're using Snow in those rugged edge locations.

At a high level, there are two main services in the Snow Family: AWS Snowcone and AWS Snowball Edge. Each of these services comes in different configurations of compute and storage capacity, and those give you options for your different use cases. Where customers use Snowcone and Snowball Edge today, they use them, as an example, as an IoT hub where data coming from sensors at the edge is captured.

Customers can quickly do processing and analytics right at the edge location to gain insights into their data and make decisions based on those insights. They can run image processing or analyze live video streams at those edge locations. They can run ML inference at the edge to make AI-based decisions. I think Sid alluded to it before: customers can run all their operations and all these applications independent of AWS for long periods of time.

The data plane and the control plane of the AWS services supported on Snow run locally on the Snow device, and that's why you don't need to connect back to the region to continue your operations. Of course, most of our customers use Snow with an intermittent connection to AWS, where they can use it to update the software, update their applications, and transfer data. But you don't need to; you could run your operations permanently disconnected from the AWS region.

So let's dive deeper into AWS Snowball Edge and AWS Snowcone. AWS Snowball Edge is essentially a secure and rugged suitcase-sized appliance that's got storage, compute, networking, and a select set of AWS services supported on the device. It weighs about 49 pounds, so it's a single-person carry. I've carried it, but not for long distances.

There are a few different configurations of the Snowball Edge device. From a compute perspective, it supports three configurations: 32 vCPUs, 52 vCPUs, and up to 104 vCPUs. There are two different configurations for storage density: one at 28 terabytes, and then there is a newer dense Snowball Edge device that can go up to 210 terabytes of storage.

As you can see, it's got a rugged enclosure, and with that rugged enclosure and the NVMe storage on the device, you can operate a Snowball Edge in motion, in places where there's vibration: think about the back of a truck, a transport aircraft, a ship, and other areas where there's vibration.

Ok. As for use cases, customers use Snowball Edge today in those rugged edge locations. One example would be integrated private wireless and private 5G. Customers can run an entire private 5G core and RAN on the Snowball Edge and still have some compute power capable of doing edge compute and multi-access edge compute at that edge location.

And our customers like the variety of configurations available for compute, 32, 52, and 104 vCPUs, so that for their applications and their workloads, they can pick the right type of Snowball Edge device based on their deployment. For example, in that private 5G case, where they have a smaller cell size and a lower amount of data throughput, they can use the 32 vCPU device, versus a deployment area where they need a lot more compute for higher network throughput management, where they can use the 104 vCPU device with the 210 terabyte dense storage Snowball Edge.

The dense storage device was announced earlier this year, and it's becoming very popular with our customers for edge storage use cases. If you think of the rugged edge, the tactical edge, a lot of data is now being generated at those edge locations, and having a dense storage device allows our customers to store and process that data before they move it to AWS.

Snowball Edge is also used by our defense and intelligence community customers at the tactical edge, where a device such as this provides compute, storage, and networking for operations in the back of a truck, in military transport vehicles, on navy ships, or just at a forward military base.

And the newest Snowball Edge devices, the ones with 104 vCPUs and the ones with 210 terabytes of storage, all have NVMe storage, which gives you more durability and resiliency in the rugged edge locations, and also higher input/output throughput performance. Both those devices, the 104 vCPU device launched about a year ago and the 210 terabyte storage device launched earlier this year, have been very positively received by our customers for those rugged edge use cases.

Let's look at AWS Snowcone. It's an ultra-portable, secure and rugged device that weighs just 2 kilograms, or less than 5 pounds; you can hold it in your hand. Our customers are using a device like Snowcone in many of those areas where they're capturing data at the rugged edge, from robots, from drones, and they can do a quick, light pass of processing on the data at the edge and then move that data to their data lake in AWS very quickly.

It's just got two vCPUs of compute, so you're not doing a lot of heavy compute, but you can use it as an IoT hub, or in the case of autonomous vehicles, you can do a little bit of filtering and annotation of the data that's captured from the sensors and move the data to AWS quickly.

With both Snowball Edge and Snowcone, there are two ways to move that data to AWS. When you have network connectivity, you can use AWS DataSync, which is supported on both Snowball Edge and Snowcone, to move the data or the results of your analysis to AWS over the network. Or, if you're always disconnected and don't have the ability to connect to a network, you can ship a Snow device back to AWS for offline data import into your S3 buckets. In the next few slides, I'll walk through some of the key edge compute capabilities of the Snow family, for both Snowcone and Snowball Edge, and give you some customer use cases for how customers are leveraging those capabilities.

First, I'd like to talk about security. For our customers, security is very important, especially in these rugged edge locations. They're storing data that's very sensitive for them, they're running their applications, and we are taking many approaches with Snow to meet those customer requirements for security: from the perspective of data encryption, software, hardware, supply chain, and compliance.

Some of those approaches to help meet our customers' requirements for security are listed on this slide, and I'll walk through some of them. All the data that's stored on Snow devices is encrypted using a 256-bit KMS encryption key that the customer selects. The encryption key itself is not stored on the device, for added protection.

The Snow devices are tamper-evident, and we use a Trusted Platform Module, TPM 2.0. As you've seen from the pictures, the Snow devices have a secure physical enclosure that provides you with added protection and security.

We follow a secure supply chain where all the components on the devices are known and tracked. The devices and the services are FedRAMP High compliant and certified. The devices can support IL5 and IL6 US secret workloads, and you can order Snow devices, Snowball Edge, from AWS Secret and Top Secret regions.

And even though you can operate your Snow devices at the rugged edge permanently disconnected from AWS, we provide you the ability to update and patch all your software while you're in the field. When you're connected, of course, you can use the network to update your software. If you're not connected, then we will help you do a sideload update of all the software using OpsHub; I'll talk about OpsHub in a later slide.

So this gives you the ability to always update your applications. If there's any security vulnerability, we can provide you with the patches, and we will notify you so you can update your software. And at the end of your workload, when you return the Snow device back to us, we clean all the data stored on the Snow device. We clean all your workloads, all your applications, using a NIST-compliant secure erasure process.

With Direct Network Interface, or DNI, capabilities, the Snow devices and Snow services support multicast streams, load balancing, routing, and other network functions that allow the instances on the Snow devices to directly interface with the external network. Using DNI on Snow, the instances running on the Snow devices have a direct layer 2 networking interface to the external network, without any intermediary translation or filtering.

And for our customers, what that means is that they are able to customize their network configuration, and they can also achieve very high network throughput. To use DNI on Snow, you create a direct network interface, you associate it with one of the physical network ports on the Snow device, and then you attach the DNI to one of the instances running on the Snow device. You can create VLAN tags for the DNI.

You can associate multiple DNIs with a physical network port on the Snow device, and you can associate multiple DNIs with instances running on the Snow device. Customers can also customize their MAC addresses. It gives a lot of configuration capability. DNI is used by our customers both in tactical edge as well as in telco use cases.

They can set up multiple DNIs per instance and per network port and achieve very high network throughput. From a physical network perspective, Snowball Edge can support 100 gigabits per second of network throughput, and we are seeing customers, especially in the telco use cases and even the tactical edge, achieving many tens of gigabits per second of throughput in their actual applications.
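To make the DNI flow above concrete, here is a minimal sketch using the Snowball Edge client on an unlocked device. The interface and instance IDs and the VLAN tag are hypothetical placeholders; check the output of `snowballEdge describe-device` for the real physical network interface IDs on your device.

```shell
# List the physical network ports and device details to find the
# physical-network-interface ID to attach the DNI to
snowballEdge describe-device

# Create a DNI on that physical port, tagged with VLAN 100, and attach it
# to an instance running on the device (direct layer 2, no NAT in between)
snowballEdge create-direct-network-interface \
    --physical-network-interface-id s.ni-0123456789abcdef0 \
    --instance-id s.i-0123456789abcdef0 \
    --vlan 100

# Review the DNIs now associated with the device
snowballEdge describe-direct-network-interfaces
```

Multiple `create-direct-network-interface` calls against the same physical port are what give an instance several DNIs, which is how the high aggregate throughput mentioned above is reached.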

So how do you manage these Snow devices that are deployed and operating at the rugged edge and are not connected back to the region? You can use AWS OpsHub, which is a graphical user interface tool to manage multiple Snow devices operating at those rugged edge locations.

The AWS OpsHub tool itself runs on a local host. Think about a laptop that's connected to the Snow devices on the same local area network. You can use OpsHub to set up and configure all the Snow devices at the rugged edge location. So think about setting up the network configuration and unlocking the devices. You can manage all your workloads: your EC2 instances, your container workloads, your Lambda functions.

You can start and stop different applications. You can reboot a device, all of that using OpsHub connected to multiple Snow devices. You can monitor the resource usage, so the physical resources on the Snow device: the vCPUs used, the storage remaining, the storage available. You can monitor the health of all your applications, and you can use OpsHub to update the software.

Like I said, when you have connectivity, you can update your software over the network, or when you're not connected, you can still use OpsHub to update the software on the Snow devices. And that could be updating your applications running on the Snow device, or updating the Snow software stack itself.

Essentially, what you would do when you're not connected is download the updates from the console onto that local host that has OpsHub and bring that local host to the Snow devices. Then you can update your software on the Snow devices. And we're adding a lot more capabilities in OpsHub to help you manage, more intuitively, a large deployment of multiple devices.

We'll provide you with notifications when software updates and patch updates are available, so you can make those updates. OpsHub will also let you know when you need to update your certs. All of that information will be provided on that local host running OpsHub. Now let's talk about customers running applications, image processing, ML inference, data analytics, on Snow devices.

So the Snow devices support a select set of AWS services for compute, analytics, containers, storage, access, security, ML inference, and data transfer. And when we say an AWS service is supported on Snow, it means both the control plane and the data plane are supported locally on the Snow device, which is what allows you to run your operations disconnected from the AWS region.

For each service, just a subset of the APIs is supported on the Snow device, not the full API set. But for the APIs that are supported, it is API compatible with the service in region. And this is what allows our customers to, say, build their VMs on EC2 in region and deploy them on an EC2-compatible instance on the Snow device.
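One way to picture that API compatibility is that the standard AWS CLI works against the device; you just point it at the device's EC2-compatible endpoint. This is a sketch only: the device IP, AMI ID, and instance type below are hypothetical placeholders, and the supported instance types and the default EC2-compatible port should be confirmed in the Snowball Edge documentation for your device.

```shell
# Launch an instance on the device from an AMI that was preloaded onto it,
# using the device's EC2-compatible HTTP endpoint instead of a region endpoint
aws ec2 run-instances \
    --image-id s.ami-0123456789abcdef0 \
    --instance-type sbe-c.large \
    --endpoint http://192.168.1.100:8008

# The same subset of EC2 APIs works here as it does in region
aws ec2 describe-instances --endpoint http://192.168.1.100:8008
```

The point is that scripts and tooling written against EC2 in region carry over to the device with only the endpoint changed.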

Similarly, you can take your containers, build them on EKS in region, and deploy your containers on EKS Anywhere running on the Snow device.

For serverless applications, you can develop your Lambda functions in region and deploy them on the Snow device. For ML-type use cases, you can train your ML model on SageMaker in region, and we've got SageMaker Edge supported on the Snow device, so you can run ML inference at the edge using SageMaker.

Storage is supported with EBS API compatibility for block storage and S3-compatible APIs for object storage. In a lot of those use cases at the edge, the customers are capturing data and processing it at the edge location because they want to gain insights and make decisions quickly. But once that's done, there may be a need to move that data to AWS, to the customer's data lake, for additional processing, or maybe just to move the data to AWS for archiving; in some cases, customers just move the results of their analytics to AWS. And that's where DataSync will help you. When you have network connectivity, either consistently or intermittently, you can use AWS DataSync to move data, or the results of your computation or analytics, to AWS.
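As a rough sketch of what that DataSync setup looks like from the region side, assuming a DataSync agent has already been activated for the device per the DataSync documentation. All ARNs, IPs, bucket names, and paths below are hypothetical placeholders.

```shell
# Source location: a share exported by the Snow device, reached via the agent
aws datasync create-location-nfs \
    --server-hostname 192.168.1.100 \
    --subdirectory /buckets/sensor-data \
    --on-prem-config AgentArns=arn:aws:datasync:us-west-2:111122223333:agent/agent-0123456789abcdef0

# Destination location: the data lake bucket in region, with an IAM role
# that grants DataSync access to the bucket
aws datasync create-location-s3 \
    --s3-bucket-arn arn:aws:s3:::my-edge-data-lake \
    --s3-config BucketAccessRoleArn=arn:aws:iam::111122223333:role/DataSyncS3Role

# Create a task tying source to destination; start it whenever connectivity
# is available (the location ARNs come from the two calls above)
aws datasync create-task \
    --source-location-arn arn:aws:datasync:us-west-2:111122223333:location/loc-src0123456789abc \
    --destination-location-arn arn:aws:datasync:us-west-2:111122223333:location/loc-dst0123456789abc
```

Because the task is a persistent object, the same task can be re-run on each window of intermittent connectivity, which fits the disconnected-edge pattern described above.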

So we talked about the different workloads that you can run on AWS. So how do you get your applications onto the Snow device? For your virtual machines, which may have been developed at the edge, not in AWS, there are two approaches you can follow to bring them onto the Snow devices:

  1. You take your virtual machines and you create an Amazon AMI running on EC2 in region. And we provide you equivalent EC2 instances between the Snow device and region. For example, the instances supported on Snowball Edge are equivalent to the M5a instance in region. So once you've developed your application as an AMI in EC2 and it's validated, when you order a Snow device, you can request those AMIs to be installed on the Snow device, and we will install them and ship the device to you.

  2. Another approach is you can update your application in region or bring new applications to your Snow device from region. In this case, you're effectively importing a VM image onto the Snow device, in region or in the field at the edge. And you can use OpsHub to make your life easy and import that virtual machine onto the Snow device.
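For the second approach, the in-field import can also be done with the CLI rather than OpsHub. This is a sketch under the assumption that the raw disk image is already in a bucket on the device; the endpoint IP, bucket, key, and snapshot ID are hypothetical placeholders, and the exact import flow should be checked against the Snowball Edge developer guide for your device's software version.

```shell
ENDPOINT=http://192.168.1.100:8008

# Import the raw disk image from the device's local bucket as a snapshot
aws ec2 import-snapshot \
    --disk-container "Format=RAW,UserBucket={S3Bucket=vm-images,S3Key=my-app.raw}" \
    --endpoint $ENDPOINT

# Poll until the import task completes
aws ec2 describe-import-snapshot-tasks --endpoint $ENDPOINT

# Register the completed snapshot as an AMI you can launch on the device
aws ec2 register-image \
    --name my-edge-app \
    --root-device-name /dev/sda1 \
    --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=s.snap-0123456789abcdef0}" \
    --endpoint $ENDPOINT
```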

So that's what I talked about - basic compute applications running as virtual machines on Snow. We're hearing from our customers more and more that they want to run container applications at the edge, in many industries like telco, retail, automotive, and even defense, given the DoD initiative with Platform One. We're seeing our customers modernize their applications by transitioning to a DevOps approach and transitioning their applications to a containerized approach.

On the Snow devices, you can of course run containers using EKS Distro or an open source Kubernetes distribution. To make it simple and easy for you, we also support EKS Anywhere on the Snow devices, and you can use EKS Anywhere to set up and create container clusters. EKS Anywhere provides the capability to set up multi-node clusters across many Snow devices, and that way you get high availability and fault tolerance.
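
A multi-node EKS Anywhere cluster starts from a cluster spec. The sketch below shows the rough shape of that spec as Python dicts, though in practice you would write it as YAML for `eksctl anywhere create cluster`. The resource names are hypothetical and the fields are simplified; consult the EKS Anywhere documentation for the full `SnowDatacenterConfig` and `SnowMachineConfig` schemas.

```python
# Hedged sketch: the shape of an EKS Anywhere cluster spec targeting the
# Snow provider, with nodes spread across devices for fault tolerance.

API_VERSION = "anywhere.eks.amazonaws.com/v1alpha1"

cluster = {
    "apiVersion": API_VERSION,
    "kind": "Cluster",
    "metadata": {"name": "edge-cluster"},          # hypothetical name
    "spec": {
        # Three control-plane nodes across Snow devices give the cluster
        # high availability even if one device fails.
        "controlPlaneConfiguration": {"count": 3},
        "workerNodeGroupConfigurations": [{"count": 3, "name": "md-0"}],
        "datacenterRef": {"kind": "SnowDatacenterConfig", "name": "edge-cluster"},
    },
}

datacenter = {
    "apiVersion": API_VERSION,
    "kind": "SnowDatacenterConfig",
    "metadata": {"name": "edge-cluster"},
}

print(cluster["spec"]["controlPlaneConfiguration"]["count"])
```

The design point is that the cluster spec, not your application manifests, is what changes between region and edge: your Kubernetes workloads deploy the same way once the cluster is up.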

We're also working with our partners - think of partners like Rancher and Red Hat - who have multi-node container application and management offerings, both in region and now supported on the Snow device. So if you'd like to use those partner applications to manage your containers, that's an option available to you as well.

One of the new use cases we're seeing this year is customers requesting the ability to store large amounts of data at the edge - many petabytes. The data is generated from sensors; it needs to be collected, processed, and analyzed at those edge locations so that customers can gain insights and make decisions in near real time without having to move the data to AWS. Eventually, that data may move to AWS for further analysis and for archival purposes.

So earlier this year, we announced S3 compatible storage on the Snow devices, which supports clustered edge storage from 3 to 16 nodes. For those edge storage use cases, if you use the dense Snowball Edge with 210 terabytes of storage, you can see how, going up to 16 nodes, you can support many petabytes of data at the edge location.
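
The petabyte claim is easy to check with back-of-the-envelope arithmetic. The node count range and the 210 TB per-device figure come from the talk; the usable fraction after redundancy varies with configuration, so this only shows raw capacity.

```python
# Raw capacity of an S3 compatible storage cluster built from dense
# 210 TB Snowball Edge devices (clusters span 3 to 16 nodes).

NODE_TB = 210

def raw_cluster_tb(nodes: int) -> int:
    assert 3 <= nodes <= 16, "S3 compatible storage clusters span 3 to 16 nodes"
    return nodes * NODE_TB

print(raw_cluster_tb(3))    # 630 TB raw at the minimum cluster size
print(raw_cluster_tb(16))   # 3360 TB raw, i.e. over 3 PB at the maximum
```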

The S3 compatible service is designed for durability: it redundantly stores data so that you can recover from disk failures on a Snow device, or from a device failure in a multi-node cluster. The service is of course compatible with the S3 API in region, with the same bucket management, so it's easy to take your applications - your data management applications that use S3 in region - and use them on the Snow devices at the edge.
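
Because the service speaks the S3 API, the only change versus in-region code is typically the endpoint and the locally issued credentials, not your application logic. A minimal sketch, where the endpoint URL and credentials are hypothetical placeholders:

```python
# Hedged sketch: pointing a standard S3 client at the S3 compatible
# storage endpoint on a Snow cluster instead of the in-region service.

s3_client_kwargs = {
    "service_name": "s3",
    "endpoint_url": "https://10.0.0.10",     # hypothetical Snow cluster endpoint
    "aws_access_key_id": "EXAMPLE_KEY",      # issued by the device, not in-region IAM
    "aws_secret_access_key": "EXAMPLE_SECRET",
}

# With boto3 installed, you would create the client and use the normal S3 API:
#   s3 = boto3.client(**s3_client_kwargs)
#   s3.put_object(Bucket="edge-bucket", Key="sensor/frame-001.jpg", Body=data)
print(s3_client_kwargs["endpoint_url"])
```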

And I touched upon this earlier: with AWS DataSync supported on the Snow device, when you have connectivity you can move data between region and the Snow device at the edge - either import data from the Snow device at the edge to AWS, or export data from the AWS Region down to the Snow device. And depending on your rugged edge location, if you don't have terrestrial network connectivity, we offer a few different approaches with our partners.

For example, for satellite connections we are working with partners like Starlink and Viasat, so you can have a satellite connection between the Snow device deployed at the edge and the AWS Region. You can also use private 5G or another mechanism to move data between the Snow devices and region. And some customers keep additional Snow devices with them on rotation: once a Snow device is full of data - 10 terabytes or 80 terabytes - they can physically ship it back to AWS for the data to be copied, while the next set of devices collects and processes the next set of data generated at those edge locations.

For this next section, we'll just focus on some of the new capabilities we've launched this year for our public sector customers, especially for tactical edge, for defense and intelligence community use cases.

Earlier this year, we announced a new Snow service called AWS Snowblade, which as of today is available only to our defense customers as part of the JWCC contract. AWS Snowblade is a toolbox-sized device that is very dense in compute: it has up to 208 vCPUs, the densest compute of any Snow device. It also comes in a short-depth, 17-inch server rack form factor - the first Snow device in a server rack form factor.

The reason we took this approach is that we're seeing a lot of customers use Snow devices in isolated data centers. They have server infrastructure, but because they're running their operations in an isolated data center, they can't connect back to AWS, so they're looking for a device like Snowblade. There are many other use cases as well - large military trucks, transport aircraft, navy ships - that can support these short-depth, rugged, rack form factor devices.

Snowblade also has an optional enclosure - the graphic on the slide shows the device with that enclosure - which is MIL-STD-810H certified. So when you're operating Snowblade outside of an isolated data center, maybe in the back of a truck or at a forward military base, you can use that MIL-STD-810H enclosure for additional ruggedization and protection. And given the nature of the device, we're seeing a lot more use cases where customers run more complex workloads at the edge - like AI inference that needs a lot more compute - and that's where a 208 vCPU device comes in useful.

I talked in the previous slide about a use case where customers are looking to store multiple petabytes of data at the edge using multiple Snow devices. We're increasingly seeing customers run more complex workloads that need multi-node containerization across multiple Snow devices, or multiple Snow devices operating in a cluster to store many petabytes of data.

So we are offering our customers a fully integrated rack solution with those Snow devices - whether Snowball Edge with 104 vCPUs, Snowball Edge with 210 terabytes of storage, or Snowblade. Each rack-level solution can support 6 Snow devices. The rack supports redundant top-of-rack switches; we set up all the redundant cabling and the L2/L3 network configuration, and it provides redundant power modules, a firewall, and switch management.

And when you order a full rack solution from us, we'll integrate it with all 6 provisioned Snow devices, set up the cabling and network configuration, ship the fully integrated solution to your edge location, and help you with deployment. Many customers are also looking to deploy multiple of these racks - given the nature of their applications, sometimes you'll have a few racks of compute nodes and then a few racks of disaggregated storage - and we will help you set that up with the Data Center Gateway and a redundant spine switch connection. In many cases, customers use the multi-node containerization I mentioned to set up their compute workloads, or, with 6 dense Snowball Edge devices and the S3 compatible storage we talked about earlier, they get 1 petabyte of storage for each of these rack solutions.

Taking it a step further beyond rack-level integration: earlier this year, we announced the AWS Modular Data Center, or MDC, to make it very simple and cost effective for customers to deploy an AWS-managed data center anywhere they need it. Think about defense customers, especially at locations outside the US (OCONUS), where they don't have infrastructure - or, if they were to set it up themselves, they'd be constrained in the amount of compute, storage, and networking they could deploy, or might spend 2 years building a new data center. Using an AWS MDC, customers can quickly deploy many thousands of compute cores and the ability to store many petabytes of data.

The AWS MDC is a secure, environmentally controlled physical enclosure that can support up to 6 Outposts racks or 6 racks of Snow devices - similar to the solution you just saw on the previous slide - and you can mix and match some Outposts racks and some racks of Snow devices. And if you need even more compute, you have the ability to scale up and deploy multiple MDCs at those edge locations.

All customers need to provide is power and a network connection - if you're deploying Outposts, Outposts needs that network connection. And as you can see from the graphic, the MDC supports satellite communication; we recently announced SES as one of our first partners to support satellite connectivity to the MDC. The MDC itself is easy to transport - it comes as two 20-foot container modules, so you can move it on a trailer truck, a military transport aircraft, or a navy ship to bring it to those edge locations.

I'm going to wrap up with a few customer examples. The first one is the AWS Disaster Response team collaborating with our partner and customer Help.NGO to bring the capabilities of Snow - the disconnected operation - to help first responders in disaster areas where there has been, say, an earthquake, a tornado, or a hurricane, and therefore there is no consistent power supply or network connectivity.

So what you see in this truck is Snowball Edge devices deployed. Help.NGO has developed a lot of software for image analysis, video analytics, and ML inference that's been loaded onto these Snow devices. Drones are sent into the impacted areas, and the images and videos from the drones are analyzed on the Snow devices to give first responders information on where they should provide assistance.

We also set up an IoT hub with sensors on the first responders for safety, tracking, and location, and it supports encrypted communication using Wickr. You can see the disaster response truck, or jeep, at the Venetian - please go take a look. And on the Casanova floor, on the first floor of the Venetian, you'll also see a demonstration of Wickr running on Snowblade and Snowball Edge.

Just recently, we collaborated with NATO's Allied Command Transformation team, and they showcased how you can combine new hyperscale technology with Snow devices for operation at the rugged edge. Effectively, the NATO Allied Command Transformation team combined next-generation communication technology with AWS Snowball Edge to allow the warfighter to capture data, analyze it quickly, gain insights, and make decisions at the edge. It also improved NATO's ability to collaborate with their multinational forces.

Another example of operation at the rugged edge, with edge data analytics and a full private 5G network running on the Snow device, comes from the Norwegian Army, who created a Network on Wheels. Here you have a Snowball Edge with 104 vCPUs running a full private 5G network core developed by our partners, and it still has additional compute capacity for edge workloads such as video and image analytics. The Snowball Edges with that private 5G and edge compute are deployed on a mobile trailer, so it can be pulled into a disaster recovery area or to a forward military base. In an area with no consistent power or network connectivity, you can very quickly capture data from sensors and drones, do analytics, make decisions to support the mission, and send data back to AWS.

Alright, that about wraps up our presentation. We've got a few minutes, so I'll invite Sid to come back up on stage, and we'll be happy to answer any questions you have. We'll also be here after the presentation - you can see our emails, so feel free to reach out later today or send us an email with any questions. And please complete the session survey as well.
