AWS Graviton: The best price performance for your AWS workloads

All right, good morning, everyone. Welcome to Friday morning of re:Invent. This is CMP334. We'll be going over AWS Graviton and how it delivers the best price performance for your workloads on AWS. And we'll dive deeper into Graviton 4, our fourth generation Graviton processor which we announced earlier this week.

My name is Stephanie Shyu and I'm the Global Head of Go-to-Market and Business Development for Graviton. Presenting with me today is Sudhir Raman, Senior Manager for EC2 Product Management and Ali Saidi, who's a Senior Principal Engineer and Lead Engineer for Graviton.

In this session, we'll first cover a brief history of Graviton and the silicon innovation that has enabled consistent price performance improvements with each generational launch. I'll then turn it over to Ali who's gonna deep dive into Graviton 4 architecture and workload performance. Lastly, Sudhir will cover software support for Graviton and best practices for transitioning your workloads.

So AWS has been investing in building custom chips for several years. This journey started back in 2013 with the AWS Nitro System. We partnered with Annapurna Labs, who later joined AWS, to address the challenge of the hypervisor consuming resources on the host CPU of EC2 instances. With Annapurna, we developed special purpose chips that became known as the Nitro cards, which offloaded functionality including storage and networking to dedicated hardware and firmware. This has allowed us to maximize resource efficiency and improve security for all EC2 instances.

Then we launched the AWS Graviton processor, our general purpose CPU, which we'll talk about at length today. And most recently, we've developed chips that are purpose built for machine learning acceleration: AWS Inferentia, which addresses deep learning inference and generative AI applications, and AWS Trainium, for high performance scale-out training requirements.

And there are many reasons why we've invested in building custom silicon. The first is that it gives us the ability to specialize and build chips designed specifically for our use cases at AWS, which allows us to focus on the feature sets that add the most value. Unlike third party providers, who are building chips to meet the requirements of as many different types of customers as possible, we can focus on the requirements of AWS customers. That allows us to avoid unnecessary complexity in the chip design, focus on optimizing for power and efficiency, and bring unique performance and cost points to our customers.

The second is that it allows us to move faster. By owning the end-to-end development process, and having the teams that design the product, build the silicon, build the servers it goes into, and write the software all under one roof, we can run processes in parallel and minimize handoffs. This allows us to maintain a rapid pace of innovation: we control when our projects start, our schedules and delivery timelines, and we can use the scale and elasticity of EC2 to burst into the compute we need for simulations and get those done quickly.

So having all the teams under one roof not only allows us to keep up a fast pace in terms of bringing new generations of chips to market, but also allows us to innovate more and create more value for our customers. So instead of optimizing each component in a silo, we can look across the stack holistically and make decisions about where we wanna optimize, whether it's the hardware that's powering the server or down to the silicon itself.

And lastly, it enables us to provide better security for our customers. So Nitro provides us with a mechanism to improve the security of our servers by providing hardware root of trust, verification of the firmware that's running on it and limiting host interactions through a narrow set of APIs.

Let's dive a little deeper into AWS Graviton history. So we launched the original Graviton processor at re:Invent 2018, and we brought it to market as the EC2 A1 instance. The A1 instance introduced ARM as an architecture in the cloud for the first time, and it demonstrated that applications could achieve the same on-demand compute access and elasticity on Graviton as on any other EC2 instance.

Since then, we've launched three additional generations of Graviton in just four years up to and including Graviton 4, which we announced earlier this week. And you can see how these chips have evolved over time. So with each iteration of Graviton, we've increased the number of transistors, we've increased the memory bandwidth. We've added more cores and more speed to those cores. And we've pushed the envelope in terms of bringing big improvements in performance and efficiency.

Today, AWS has built more than 2 million Graviton processors and I'm gonna spend a little more time talking about Graviton 2 and Graviton 3 and how our customers are using them before turning it over to Ali to get into Graviton 4.

So with Graviton 2, we delivered a seven-fold increase in performance over the original Graviton processor, and that was just one year after launch. Graviton 2 provides up to 40% better price performance for a wide variety of workloads versus comparable instance types. These include workloads across databases, big data analytics, web servers, video encoding and many more.

We have 12 different Graviton 2 based EC2 instances today. So customers have a wide variety of choice across memory, storage, and networking, as well as GPU-attached instances, to support virtually any type of workload in the cloud.

With Graviton 3, which launched two years later, we doubled the number of transistors, increased performance by 25%, and delivered up to 2x higher floating point performance. This was also the first instance to feature DDR5 memory, which increased memory bandwidth by 50%.

And for AI/ML workloads that can run on CPUs, Graviton 3 introduced key architectural improvements like 2x the vector width and support for bfloat16 instructions. Combined with increasing optimizations for ARM64 in TensorFlow and PyTorch, CPU-based AI/ML workloads can now achieve up to 3x better performance on Graviton 3 versus Graviton 2.

And we have eight Graviton 3 based EC2 instances today across the Compute Optimized, General Purpose and Memory Optimized families, as well as enhanced networking with C7gn, offering up to 200 gigabits of network bandwidth, and HPC7g, which is purpose built for high performance computing workloads and is delivering up to 35% higher vector instruction performance.

So across Graviton 2 and Graviton 3, we now have over 150 different EC2 instance types and you can find Graviton across all 32 AWS regions today.

So today we have over 50,000 customers that are using Graviton to realize price performance benefits. And these include customers of all sizes from early startups to some of the largest and most complex enterprises. And they span across all geographies and industry verticals including software, ad tech, financial services and media and entertainment.

So we have customers like Splunk who are running their indexing and search engine workloads for event data on Graviton and companies like Epic who are running Fortnite and the Unreal game engine on Graviton. Regardless of the workload they choose to run on Graviton, customers are achieving big improvements in terms of performance and cost efficiency.

For example, NextRoll, a marketing and data technology company uses Graviton 3 based instances for their ad servers and ElastiCache clusters. They were able to achieve 15% more requests per instance and up to 40% better latency over Graviton 2 based instances.

You also have Sprinklr which provides a unified customer experience management platform. They saw 27% better performance which allowed them to deliver results to their customers faster, improving their experience while also optimizing their costs.

And lastly, we have Stripe which provides online and in person payment processing and financial solutions. They migrated their ETL workloads over to Graviton and achieved a 50% improvement in query performance while reducing their error rates by 10 to 15%.

And it's not just external customers who are adopting Graviton. We also use Graviton servers internally to power a number of mission critical workloads. For example, Graviton servers have powered Amazon Prime Day events going back to 2021, providing core retail services at a scale of tens of millions of normalized Graviton instances, with cost efficiencies.

So it's not just cost savings and performance that are motivating customers to move to Graviton. More and more, we're seeing customers consider the carbon footprint of their compute infrastructure, and they're thinking about ways to reduce their overall carbon emissions. And since the greenest energy is the energy that we don't use, customers can improve the sustainability of their workloads by reducing resource consumption and using more energy efficient instances.

Graviton not only delivers the best price performance but also delivers the best performance per watt of energy on EC2. So what this means is that for the same workload, Graviton uses up to 60% less energy over other processors and this is enabling customers to meet vital carbon emissions reduction targets without having to sacrifice on performance or cost.

A great example here is Snowflake, which provides a fully managed cloud-based data storage and analytics platform. They aspire to provide the best data cloud with the lowest carbon footprint on the planet, and to support their sustainability goals, they seamlessly transitioned their virtual warehouses from x86-based instances over to instances based on Graviton processors. With this, they were able to optimize their cloud environment for cost and performance, and they also achieved a 57% lower workload carbon intensity while improving performance by 10%.

Another great example that's not on the slide here is Adobe. They recently migrated their Ad Cloud over to Graviton. And with that, they were able to increase the number of auctions per CPU by close to 20% over the previous solution. And they were able to reduce their overall carbon emissions by 41%.

We also have ARM who's a leading technology provider of processor IP. They recently reported a 67% reduction in their workload carbon intensity for their chip design simulation workloads that were running on Graviton 3.

So while Graviton 2 and Graviton 3 have already delivered so many benefits, we continue to see customers asking us for more performance and energy efficiency as they bring more workloads to the cloud, transform their businesses and deliver more value to their customers.

Earlier this week, Adam announced Graviton 4, our fourth generation Graviton processor, and the preview of R8g, the first Graviton 4 based EC2 instance. R8g brings up to 30% better performance when compared to Graviton 3 based R7g instances, with up to 3x more vCPUs and 3x the memory.

R8g instances are ideal for memory intensive workloads like databases and real time big data analytics. R8g instances are available in preview now and you can sign up for the preview on the R8g instance product page.

With that, I would like to turn it over to Ali to deep dive into Graviton 4 architecture.

We had customers who had moved all their databases to Graviton, and they said: this is great, I'm using an 8xlarge, I've got 32 cores, and I can see that in a year or two I'm gonna have to scale that up to 64 cores. What then? Can I stay on Graviton or not? And really we have two answers for them today. Yes, you absolutely can, and we're letting you scale up in two different ways.

The first is every socket has 50% more cores, and each of those cores goes about 30% faster. So that's by itself a massive improvement in the amount of processing that can be done in a single socket. But for the first time, we've also added coherent multi-socket support to our systems. So we can take two Graviton 4 processors and they can be a single system image with up to 3x the number of cores and 3x the amount of DRAM capacity as our previous R7g instances.

With Graviton 2, we introduced encrypted DRAM, and we carried that forward with Graviton 3. With Graviton 4, we're expanding that: the coherent links I just mentioned on the previous slide are also encrypted, and additionally, the connection between our Nitro card and the host, that PCIe link, is encrypted as well.

Stephanie mentioned earlier, in the slide about specialization and innovation, being able to innovate across boundaries. One example of this is when we developed the Nitro card for this system, we wanted to be able to run it in several different modes. This server with one Nitro card can be two non-coherent virtual systems of up to 24xlarge each, it can be that coherent 48xlarge system with 192 vCPUs in one, or it can be two metal instances or one metal instance. You might ask, why do we have these different operating modes? And it's really simple: a lot of people aren't going to need the 192-core systems, and when they're not using that, we can turn off that link and save some additional power.

When we developed Graviton 4, we paid close attention to real workloads, looking at how they performed. And when we were collaborating with Arm on the core used for Graviton 4, we wanted to understand what we were optimizing for and how we could do better. To do that, we visualized a lot of workloads as radar plots that look like this. You can think of a single plot as giving you a holistic view of a workload's characteristics: each axis corresponds to a different chip design trait, with the value corresponding to the sensitivity that workload has to that specific trait.

And processors can really be thought of as a front end and a back end. The front end is the portion of the processor that fetches instructions, and its performance is influenced by factors like the number of branches, the branch targets, and where those instructions are coming from: the L1 cache, the L2 cache, the last-level cache, or main memory. The back half of the processor is what executes instructions. It has the adders, the load units, the multipliers, and these are sensitive to properties like where the data is, in the L1 or L2 cache, the instruction window, TLB misses and other things. These ultimately result in back-end stalls.

And so we can take a workload, visualize it on this plot, and change things while we're developing the processor to see how those metrics change. Here, smaller is better. So what could a graph like this teach us? Well, if we look at a traditional benchmark, we'd say we need to make sure we're optimizing the back end entirely; the front end isn't really even used. And the reason for that kind of makes sense: when people build benchmarks, they typically take a big application, extract what they think is the interesting portion of it, put it in a little workload, and loop across that a bazillion times. So it doesn't really tend to touch the front end of the processor; it's just the same instructions over and over again. And so in this benchmark, while we see sensitivity on the right-hand side of the graph, we don't see a lot anywhere else.

Well, what about workloads that we think customers run in production, like databases such as Cassandra, web apps written in Groovy, or web servers running NGINX? They're pretty different pictures. You can see here that things like the branch predictor matter a lot more, the branch target buffers matter more, and where the instructions are coming from matters: a lot of them had front-end stalls. And so with Graviton 4, we made a number of improvements to help those workloads.

The first I mentioned before: the L2 cache is 2x larger, so it fits more instructions and more data closer to the core. The second is there are particular improvements in the Neoverse V2 core to predict branches better and to allow the processor to get more instructions to execute. And lastly, we've adopted Armv9 with Graviton 4, which includes SVE2: saturating arithmetic, widening and narrowing, and other instructions for media encoding and other workloads, as well as some additional control flow integrity instructions, branch target identification in particular. Amazon Linux 2023 has support for all these control flow instructions and allows you to find bugs and issues in your code more easily.
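As a quick sketch of what enabling those control-flow protections looks like when you build your own code (the file name is illustrative, and the flag is supported by recent GCC and Clang on arm64):

```shell
# Illustrative only: compile with branch-target-identification enabled.
cat > bti_demo.c <<'EOF'
int add(int a, int b) { return a + b; }
int main(void) { return add(2, 3) == 5 ? 0 : 1; }
EOF
if command -v cc >/dev/null 2>&1; then
  # -mbranch-protection=standard emits BTI/PAC landing pads on arm64;
  # fall back to a plain build on other architectures.
  cc -O2 -mbranch-protection=standard -c bti_demo.c 2>/dev/null \
    || cc -O2 -c bti_demo.c
fi
```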

So we've talked about Graviton 4, but how does it perform? Here I've got a number of workloads that we've measured, and I'm comparing a Graviton 3 based R7g to a Graviton 4 based R8g. In all these cases, we're using the same number of vCPUs: eight. First, we're running HammerDB. HammerDB is an open source load generator for databases. It's meant to mimic a company that sells items, keeps stock, receives orders, processes payments, delivers those orders, et cetera. You typically measure new orders per minute with this workload.

So, how many orders can you process in this database? We loaded a Graviton 3 and a Graviton 4 system with the same load generator, configured the same with 32 virtual users and 50 warehouses, and you can see a 40% increase in the performance of the processor. This is a very similar number to what Aurora has seen when they also measured R8g. NGINX is a popular web server; it can also be a load balancer. Here we used the open source load tester called wrk to generate 512 clients' worth of HTTPS requests through NGINX as a load balancer to a bunch of back-end web servers.

We kept the load generator and those web servers constant, and again tried a Graviton 3 and then a Graviton 4 based system, and you can see around a 30% increase in performance on this workload. Grails is an open source web application framework that's built on top of Groovy and runs on a JVM. We found this to be more representative of typical Java workloads than the benchmarks people usually reach for. Here too, we're comparing a Graviton 3 to a Graviton 4 based system, and you can see around a 45% increase in performance on this workload. It's just a massive jump going from generation to generation.

And the last one I have for you today is Redis. Redis is a popular key-value store. People use key-value stores to offload their databases, to quickly look up data and improve responsiveness for their applications. Here, we're using 16-byte keys and 128-byte values, and we have three load generators: two are generating load and one's measuring latency, because latency is a very important component of this workload. And here you see 25% higher performance on Redis.

Now, if we zoom out and look at how we've been doing generation over generation, starting with the A1 instance, then the R6g based on Graviton 2, the R7g based on Graviton 3, and now the R8g based on Graviton 4: from 2018 to now, in those five years and four generations, we see around a 4x increase in performance depending on the workload. Sometimes a bit more, sometimes a bit less, but these are just huge gains in five years.

So with that, we have our Graviton 4 based EC2 instances: industry-leading performance, the scalability of 3x larger sizes, the first systems in our data centers with DDR5-5600, the best price performance, and also substantial gains in energy efficiency. Now, of course, I've presented you with our measurements of these, but it's much more interesting to see what our customers have to say about it.

The first customer is Datadog. Datadog is an observability and security platform. They have tens of thousands of EC2 instances, and today around 50% of those run on Graviton. They report that it was seamless to move to Graviton 4 and that they saw an immediate performance boost. Epic Games is the maker of Fortnite, and they said that the Graviton 4 instances are the fastest EC2 instances they've ever tested, and they look forward to how Graviton 4 will improve the Fortnite experience.

And lastly, Honeycomb. Honeycomb is a platform that enables engineering teams to find and solve problems they couldn't before. When they tested Graviton 4, they saw a 25% increase in performance, effectively fewer instances, while at the same time seeing a 20% improvement in median latency and a 10% improvement in p99 latency.

So with that, I'm going to turn it over to Sudhir to talk about the software on Graviton. Thank you, Ali, for those great insights on Graviton 4. I want to shift gears a little bit and talk to you more about software: how you can get started with moving your workloads to Graviton, and some of the best practices that we've assembled for different languages and applications.

So one of the most common questions that we at AWS get asked in many customer conversations is: what does the software ecosystem for Graviton and ARM64 look like, and how do I get started? What are the right workloads, and what are some of the best practices that we've seen from other customers and learned ourselves that we can replicate across workloads?

So let's start with software, and with operating systems first. The support for Graviton and ARM64 across the Linux world is pretty strong. That includes all the commercial Linux operating systems, including Amazon Linux 2 and Amazon Linux 2023, as well as Ubuntu, Red Hat Enterprise Linux, and SUSE Linux Enterprise Server. It's a very similar story across all the different community distributions, where you have a ready-to-use, out-of-the-box ARM64 AMI, or Amazon Machine Image, available.

Let's talk about AWS tools and software. Pretty much all the tools and software that you've come to rely on for your infrastructure also support Graviton and ARM64 today. These include things like the entire AWS code suite that allows our customers to build and test. It includes capabilities like Auto Scaling groups, where you can also use mixed architectures and support both x86 and ARM64.

It also includes software like Amazon Corretto, our open source distribution of OpenJDK. There are services like AWS Batch, as well as the AWS Marketplace with thousands of software listings, where you can also find ARM64 equivalents of that software. Moving on to some of the other third-party ISV software.

What you can see here is, firstly, databases. Databases are a first-class citizen on Graviton, so pretty much all the popular databases that you typically use are supported;

there's a Graviton version for those. It's a similar story with DevOps: all the major CI/CD providers that allow you to build, test, and deploy are supported with Graviton. And there's a range of configuration, monitoring, security, logging, and automation software, all of that available.

This is a partial list, and there's more coming as part of our Graviton Ready partner program. This is a program that we launched a couple of years ago to continue to fuel the momentum that we're seeing in the ISV software ecosystem for Graviton support. What these partners do is offer validated software offerings that are ready to work with Graviton. These are validated through our solutions architects team, and this ensures that you have an easy on-ramp to picking up software that's already been tested to work on Graviton. This is a fast-growing list, and we have the full list published on our website, which I encourage you to refer to. Talking about databases:

We had a really exciting announcement just earlier this month with SAP HANA Cloud. For context, SAP HANA Cloud is a fully managed in-memory cloud database as a service. SAP and AWS have been collaborating over the last several months to port HANA Cloud to Graviton, and just earlier this month that became generally available. What SAP realized is up to 35% better price performance for analytical workloads after they transitioned to Graviton.

The energy efficiency and sustainability part of it has been a big focus for SAP as well, and they have estimated a 45% reduction in compute carbon footprint after transitioning to Graviton instances. So while you can obviously run Graviton directly on EC2, we have also extended the performance and price performance benefits of Graviton to a variety of managed services within AWS. If you look across all the major database, analytics, compute, and machine learning AWS managed services, those support Graviton and give customers significant price performance benefits.

The most recent addition to this list is the Amazon MSK service, our managed streaming for Apache Kafka, which announced the addition of Graviton 3 support just earlier this week at re:Invent. So, transitioning to Graviton: let's talk about some of the best practices we've assembled on how you can get your workloads up and running on ARM64 and Graviton.

So firstly, in terms of candidates, all Linux workloads are typically a really good starting point. As a general rule, what we suggest is the more current your software stack, the better, just because it gives you a much better starting point in terms of getting the best performance out of the system. If you're running interpreted or bytecode languages, there's really no modification required; that includes things like Python, Java, Ruby, PHP and more. If you're running compiled applications like C, C++ or Go, those will need to be recompiled for ARM64, and all the major compilers will allow you to do that.
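To make that concrete, here's a minimal sketch. There's nothing Graviton-specific in the source; on an arm64 build host, a plain recompile is all it takes (file names here are illustrative):

```shell
cat > hello.c <<'EOF'
#include <stdio.h>
/* Portable C: the same source compiles unchanged on x86_64 and aarch64. */
int main(void) {
    printf("hello\n");
    return 0;
}
EOF
if command -v cc >/dev/null 2>&1; then
  cc -O2 -o hello hello.c && ./hello   # prints "hello" on either architecture
fi
uname -m   # reports aarch64 on a Graviton instance
```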

And if you have any intrinsics or you're relying on any assembly language, those will need to be ported to ARM64. But otherwise, it should be pretty much a straightforward recompilation. As far as Amazon Machine Images go, for all the Linux distributions we just covered, there are AMIs available out of the box that you can simply deploy, ready to use. Here's a slightly different view of what we saw in the previous slide, because we get asked by a lot of our customers: hey, can you also grade this by ease of adoption, in terms of what could be low-hanging fruit?

So, going from virtually no effort to things being super easy: the managed service on-ramp, because a lot of the architectural changes are handled under the hood by the managed service. Typically, if you use an AWS managed service that supports Graviton, just upgrade to a version that's supported by that service and you'll be able to enjoy the price performance savings right away. In most cases, it's a simple instance switch.

We talked about the interpreted as well as the compiled languages. And then if you're running things like Microsoft Windows and .NET, there are a couple of steps there, where you can move to .NET Core on Linux and then move over to Graviton. Containers: I just want to take a minute to talk about containers, because we see a lot of customers today running containerized microservices on Graviton-based instances.

So overall support for ARM64 and Graviton in the container ecosystem is really strong, and that includes our own managed services with ECS and EKS, as well as Docker. There's also support across the multiple image registries, where they support multi-architecture images with both x86 and ARM64 available. And there's support for some of the newer container technologies such as Bottlerocket and Flatcar. In terms of what you need to do, there are some steps to think through for moving containerized workloads.

The container images are architecture-specific, so you would need ARM64 images for your containers. Now, the good news is that for the majority of the popular software in the container ecosystem, you can already find images available in the registries of your choice. Some examples are listed here, and there's more detailed documentation in our technical guide.

Developers also have the option of building their own images, and that can be done through either native compilation, cross-compilation, or docker buildx. One thing to note here is that with the multi-architecture support in the various registries, through manifests, the right image is automatically deployed based on the host architecture. So that makes your life really easy when it comes to deploying containerized workloads.
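As a sketch of the buildx path (the image name is a placeholder, and the docker invocation assumes a buildx-enabled Docker install):

```shell
# One Dockerfile, two architectures; buildx builds both in a single command.
cat > Dockerfile <<'EOF'
FROM alpine:3.19
CMD ["uname", "-m"]
EOF
if command -v docker >/dev/null 2>&1; then
  docker buildx build --platform linux/amd64,linux/arm64 -t myrepo/app:latest . || true
else
  echo "docker not available here; command shown for illustration"
fi
```

Pushing that image to a registry publishes a manifest list, so the same `myrepo/app:latest` tag resolves to the right architecture on pull.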

Java is also super popular with a lot of our customers. There's no need to recompile, and it generally performs well out of the box on ARM64. There are multiple sources where you can get JDKs, but what we recommend, if you have a choice, is Amazon Corretto: it gives you the fastest path to getting access to a lot of the performance optimizations that AWS is making.

So if you have a choice, we would recommend Corretto. In terms of versions, a couple of things to keep in mind: Java 8 and newer is typically supported, and we've been getting feedback that a lot of customers have transitioned to Java 11 and newer to get better performance out of the system.

I'll walk through a couple more applications, and there's much more detail in our technical guide. With Python, it's an interpreted language, so you can easily install Python packages using pip install; we recommend making sure you're on an up-to-date version of pip.
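The upgrade itself is a one-liner; as a sketch (the `pip debug` output is just a quick way to see which wheel tags your host accepts, including the aarch64 tags on Graviton):

```shell
python3 -m pip install --quiet --upgrade pip || true   # may require network access
# On Graviton, the compatible tags include aarch64 platform tags.
python3 -m pip debug 2>/dev/null | head -n 3 || true
uname -m
```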

AWS has been actively working to make sure that most of the popular packages are pre-compiled and available for Graviton. There are more than 200 popular packages already available today, and you can see their status in our Python wheel tester. One caveat I wanted to call out: while we have a lot of pre-compiled packages, and that list is ever growing, in cases where a pre-compiled package is not available, pip will download the source and build it, which can take a bit longer as you install that package. So that's something to keep in mind as you go through it.

Machine learning, again, is top of mind for a lot of organizations today. For CPU-based machine learning, Graviton is a great candidate to run those workloads. There are optimizations available across most of the popular frameworks, and when it comes to PyTorch and TensorFlow, there are a few different levels of software abstraction available. Namely, there's the AWS Deep Learning Container, which essentially delivers the best performance and comes with all the packages pre-installed.

You also have the option of Python wheels, which give you the easiest path to getting some of the released features. And then there are also Docker Hub images that have some downstream experimental features available. Micro-architecture optimizations for key machine learning primitives are also provided by the Arm Compute Library.

So that, combined with all the Graviton 3 and newer architectural improvements like bfloat16 instruction support, makes Graviton pretty interesting overall for CPU-based machine learning workloads.

We spoke a little bit about C and C++. As a language, it's compiled, it's low level, so you would need to recompile it; use a recent compiler. If you're using any assembly language, or if you have any intrinsics, or you're relying on instructions like AVX, then those will need to be ported, and there are portability libraries like SIMDe available that can give you a good starting point to do that.

Graviton also supports Large System Extensions, or LSE. That helps with low-cost atomic operations and can improve system throughput for things like locks and mutexes. And there's an optimized libc available in many of the Linux distributions.
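A tiny sketch of the kind of code that benefits from LSE: standard C11 atomics. On recent arm64 GCC, the outline-atomics mechanism selects LSE instructions at run time when the CPU has them, so no special flags are needed (the file name is illustrative):

```shell
cat > atomics.c <<'EOF'
#include <stdatomic.h>
#include <stdio.h>
/* Atomic read-modify-write: maps to LSE instructions on Graviton 2 and newer. */
int main(void) {
    atomic_int counter = 0;
    atomic_fetch_add(&counter, 5);
    printf("%d\n", atomic_load(&counter));
    return 0;
}
EOF
if command -v cc >/dev/null 2>&1; then
  cc -O2 -o atomics atomics.c && ./atomics   # prints 5
fi
```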

PHP: pretty straightforward, fully supported on Linux, and there's no need to recompile. In terms of versions, we recommend moving to at least PHP 7.4 or newer; we found up to 37% better performance going to 7.4 as opposed to PHP 7.3, and PHP 8 comes with even more enhancements and performance improvements.

Just a quick word on Go, for folks that might be looking at using Go on Graviton. It's available with out-of-the-box ARM64 support, and you should use the latest Go compiler and toolchain to improve ARM64 performance. We actually put out a blog that shows there's 20% higher performance to be gained by using Go 1.18 and newer versions.
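As a sketch, targeting Graviton from any development machine is built into the Go toolchain; the file and output names here are illustrative:

```shell
cat > main.go <<'EOF'
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Prints "arm64" when the binary runs on a Graviton instance.
	fmt.Println(runtime.GOARCH)
}
EOF
if command -v go >/dev/null 2>&1; then
  # Cross-compile a Linux/arm64 binary regardless of the build host.
  GOOS=linux GOARCH=arm64 go build -o app-arm64 main.go
fi
```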

And there are many more applications than we have time for, but all of that is documented in our GitHub technical guide. So that brings us to the end of our presentation today, and I want to quickly summarize the key takeaways, in terms of Graviton providing the best price performance in EC2 for a broad array of workloads.

Graviton 4, which we just announced a couple of days ago at re:Invent, is the most powerful and the most energy efficient chip that we've built at AWS so far. It comes to EC2 in the form of R8g, powering our eighth-generation EC2 instances, which will deliver the best price performance for memory-intensive workloads. And that will be the first of many Graviton 4 powered instances to come.

So the R8g instance is available in preview today. If you go to the R8g web page and want to get access to the preview, please feel free to sign up; that will get routed to our teams and we'll be able to enable access.

So thank you so much for taking the time to attend this session. We really appreciate it, and I hope you've all had a great re:Invent so far and have a safe trip back home.
