Contributor
Can you provide more information about the intermittent failures? It's hard to reproduce this specific issue. If you can reproduce it, printing the results of
console.log(this.httpResponse)
and
console.log(this.request.httpRequest)
would be helpful.
For example:
ec2.client.copyImage(params, function (err, data) {
  if (err) {
    console.log("Got error:", err.message);
    console.log("Request:");
    console.log(this.request.httpRequest);
    console.log("Response:");
    console.log(this.httpResponse);
  }
  // ...
});
Author
Request:
{ method: 'POST',
path: '/',
headers:
{ 'User-Agent': 'aws-sdk-nodejs/v0.9.7-pre.8 linux/v0.8.17',
'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8',
'Content-Length': 228 },
body: 'AWSAccessKeyId=*something*&Action=DescribeSecurityGroups&Signature=WVNJG7aKN3fBd%2FFIivanvr3jRkZSrXiD6GWKfrMCAwI%3D&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2013-04-11T07%3A11%3A00.620Z&Version=2013-02-01',
endpoint:
{ protocol: 'https:',
slashes: true,
host: 'ec2.us-east-1.amazonaws.com',
hostname: 'ec2.us-east-1.amazonaws.com',
href: 'https://ec2.us-east-1.amazonaws.com/',
pathname: '/',
path: '/',
port: 443,
constructor: { [Function: Endpoint] __super__: [Function: Object] } },
region: 'us-east-1',
params:
{ params:
[ [Object],
[Object],
[Object],
[Object],
[Object],
[Object],
[Object] ] } }
Response:
{ statusCode: 403,
headers:
{ 'transfer-encoding': 'chunked',
date: 'Thu, 11 Apr 2013 07:11:02 GMT',
server: 'AmazonEC2' },
body: }
Contributor
It looks like the request is being sent to the us-east-1 endpoint, which from your description is not what you are trying to do. Are you providing the correct region to either the global config or EC2 object?
If so, perhaps this is an issue only on retries, which could explain the "randomness". Adding a console.log line for this.retryCount (or even just this) would help to show if this was triggered by a retry or not.
Contributor
I'm getting this error (SignatureDoesNotMatch) with s3 getBucketTagging using v0.9.8-pre.9 installed via npm.
var aws = require('aws-sdk');
var path = require('path');

aws.config.loadFromPath(path.join('.', 'aws-credentials.json'));
aws.config.update({region: 'us-east-1'});

var s3 = new aws.S3();
s3.client.getBucketTagging({
  Bucket: ''
}, function(err, data) {
  if (err) {
    return console.log('Error getting bucket tagging: ' + JSON.stringify(err));
  }
  console.log(JSON.stringify(data));
});
output:
Error getting bucket tagging: {"message":"The request signature we calculated does not match the signature you provided. Check your key and signing method.","code":"SignatureDoesNotMatch","name":"SignatureDoesNotMatch","statusCode":403,"retryable":false}
Using the same s3 client, other operations (at least listBuckets) do work.
Any help would be appreciated. I'm trying to set up cost allocation billing and I want to add a standard set of tags to all our assets and doing so by hand would be ... tedious :)
Contributor
Closed by #107
@pilani - It's been a long while since you posted this question, but, I ran into a similar issue.
Essentially, the library I was using to generate the Signature query string parameter wasn't escaping spaces. So, every so often a value would be generated that contained a space - and I'd see a 403. Ensuring proper encoding of URI query string parameter-values did the trick.
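To make that concrete: the fix boils down to percent-encoding every query-string value before it goes into the URL, since base64 signatures can contain `+`, `/`, and `=`. A minimal pure-JS sketch (the parameter values below are made up for illustration):

```javascript
// Signature values are base64, so they may contain '+', '/', and '='.
// All of these must be percent-encoded in the query string, or the
// server decodes something different from what was signed.
function encodeQuery(params) {
  return Object.keys(params)
    .map(function (k) {
      return encodeURIComponent(k) + '=' + encodeURIComponent(params[k]);
    })
    .join('&');
}

// Hypothetical signature containing '+', '/', and '=':
var qs = encodeQuery({ Signature: 'ab+cd/ef=', Action: 'DescribeSecurityGroups' });
console.log(qs);
// Signature=ab%2Bcd%2Fef%3D&Action=DescribeSecurityGroups
```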
Erin
In case this helps anyone: I was getting this same signature failure, but the mistake I was making was including an extra HTTPS header (Content-type) which is apparently used to calculate the signature.
Frustrating to track down, for sure, but ultimately my fault.
I fought with this error for 2 days, until I generated a new set of keys for my account, after which everything worked magically. Note that my old keys were very old (from 2006). A nice error message of Expired Keys or Obsolete keys or something would have been very helpful here. I realize that this is probably not something that can be detected by the JS library, but maybe a request upstream to add this to AWS in general would be in order here...
I'm trying to create a bucket:
var http = require('http');
var crypto = require('crypto');

var isoDate = new Date();
var sc = crypto.createHmac('sha1', 'my-key')
  .update(new Buffer('PUT\n\n\n' + isoDate + '\n/erka', 'utf-8'))
  .digest('base64');

var options = {
  method: 'PUT',
  port: 80,
  hostname: 's3.kaya.pvt',
  headers: {
    Host: 'erka.s3.kaya.pvt',
    Authorization: 'AWS our-id:' + sc,
    Date: new Date(),
    'Content-Length': 0
  }
};

var callback = function (response) {
  var str = '';
  response.on('data', function (chunk) {
    console.log('-------------');
    str += chunk;
    console.log(chunk.toString());
  });
  response.on('end', function () {
    console.log('**********');
    console.log(str);
  });
};

http.request(options, callback).end();
This returns:
SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your AWS secret access key and signing method. Resource: erka/
Where is the problem? Can anyone help?
Contributor
@erdogankaya
It doesn't appear as though you're using the AWS SDK. The SDK exposes an operation to create an S3 bucket:
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createBucket-property
Please take a look at our getting started guide for information on how to configure and use the SDK:
http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/node-intro.html
@chrisradek
We use our own S3-compatible storage, not Amazon S3, so I can't use the aws-sdk. I'm trying to create the bucket over raw HTTP, but it fails with a signature error. Can you help?
I am having the same issue, but with a GET (I am trying to retrieve a file):
Problem accessing S3. Status 403, code SignatureDoesNotMatch, message 'SignatureDoesNotMatch'
Original xml:
Some(SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method. AKIAI3DOSLNJC4YGPZSQ
StringToSign:
AWS4-HMAC-SHA256
20160627T212133Z
20160627/eu-west-1/s3/aws4_request
f8795580f6de5742706adab4ca6c89f8bfa9565b1e0c837a54297a2dcd8114df0dd99e84f2424d8d46dcff6119a9e34fa049489106149214caefe0fc1bd2d213
StringToSignBytes: 41 57 53 34 2d 48 4d 41 43 2d 53 48 41 32 35 36 0a 32 30 31 36 30 36 32 37 54 32 31 32 31 33 33 5a 0a 32 30 31 36 30 36 32 37 2f 65 75 2d 77 65 73 74 2d 31 2f 73 33 2f 61 77 73 34 5f 72 65 71 75 65 73 74 0a 66 38 37 39 35 35 38 30 66 36 64 65 35 37 34 32 37 30 36 61 64 61 62 34 63 61 36 63 38 39 66 38 62 66 61 39 35 36 35 62 31 65 30 63 38 33 37 61 35 34 32 39 37 61 32 64 63 64 38 31 31 34 64 66
GET
/test%C2%ADtest%C2%ADtest%C2%ADfilipe/1234.jpg
What can I do?
Contributor
Hi @filipegmiranda
Can you provide the relevant code you are running to provide more context? What SignatureVersion are you using, and are you specifying the correct region for your bucket? Are you using the AWS SDK, and if so, what version? Are you using Amazon S3, or a third party S3 clone?
I am using play-s3, a Scala library that is supposed to work fine with Play Framework.
Here is my complete stack:
Scala
Play Framework 2.5
I am trying to retrieve a file from my Bucket, my credentials are fine. And yes, I am providing the region, which is the correct one:
Here are my properties in application.conf:
aws.accessKeyId="AAAA" // Not the real one
aws.secretKey="AAAA" // Not the real one
aws.bucket="filipe-bucket"
s3.region="eu-west-1"
I have just posted a question with all of this information on StackOverflow as well.
My next step is to give up on this library, which appears to be really nice, and try the Java SDK from Amazon directly.
Contributor
Hi @filipegmiranda
Since you are getting a signature mismatch error, can you provide the signature version you are using (v2 or v4)? What version of the AWS SDK are you using? How are you instantiating the S3 service object? What options are you passing in to the S3 constructor when you instantiate it?
Note that the endpoint configured on an S3 service object is determined when you instantiate it, so changing the region on it later (s3.region = "eu-west-1") won't correct the endpoint. You'll need to pass the region as an option to the S3 constructor when instantiating it. The latest version of the SDK supports redirecting the endpoint to the right region for Amazon S3, but it cannot override custom endpoints if you are using a third-party S3 clone.
I am using v4. I actually don't know which version of the SDK, since I am using
play-s3; I don't think it even uses the SDK behind the scenes.
:)
I will try to use the official AWS client
Regenerating keys worked for me.
Yes, I tested this 3 times now and the solution is to regenerate the secret key.
According to https://forums.aws.amazon.com/thread.jspa?messageID=733434, this error means a
mistyped Access Key or Secret Access Key.
Hi, I have a related issue. I'm working with Device Farm, uploading my app and tests there using the aws-sdk npm package (v2.6.4).
var params = {
  name: 'my.apk',
  projectArn: 'arn:aws:devicefarm:somestuff',
  type: 'ANDROID_APP',
  contentType: 'application/octet-stream'
};
devicefarm.createUpload(params, callback);
So after I call devicefarm.createUpload(params, callback) I receive a response JSON with the upload info and a pre-signed S3 upload URL.
{ upload:
{ arn: 'arn:aws:devicefarm:somearn',
name: 'my.apk',
created: Thu Sep 22 2016 15:33:43 GMT+0000 (UTC),
type: 'ANDROID_APP',
status: 'INITIALIZED',
url: 's3 url here'
}
}
Then I try to perform a PUT request to actually upload the file and get this SignatureDoesNotMatch response.
I've tried to make the same PUT request from the terminal using curl and I get the same error.
BUT if I call create-upload from the AWS CLI with the same parameters
aws devicefarm create-upload --project-arn=arn:aws:devicefarm:somestuff --name=my.apk --type=ANDROID_APP
and obtain another pre-signed URL, my curl PUT request succeeds. So I assume there is some problem with how the SDK signs the URL.
The AWS CLI and aws-sdk are initialized with the same credentials. I don't have an opportunity to regenerate the secret key as proposed, and I'm not sure that is the solution, because it works fine with the CLI.
Any ideas?
The request signature we calculated does not match the signature you provided. Check your key and signing method
I'm getting this error on Ubuntu, but not on macOS. Why?
I'm facing exactly the same situation as @rishaselfing has just described. Any thoughts on it?
I got it working after I specified "content-type": "application/octet-stream" in the header of my request.
I changed the content-type header of my request to "application/x-www-form-urlencoded" and then it worked.
I had the same error, and my simple fix was that I was deploying on Heroku, which serves over HTTPS, while I was calling the non-secure http URL.
I added the 's' and mine worked.
I had to simply reset my credentials locally; not sure why, strange. Everything had been working fine for a week.
What's weird is that when I ran the command to configure aws, it showed I already had a key and secret (the same ones I've always used), but for some reason it was throwing errors today and resetting them worked.
Generating new keys for my account got me past this issue.
I got this same issue. What ended up being the cause is that I was accidentally sending a Content-Length of -1. Once I sent the correct Content-Length, it worked great.
I got the same issue: there was whitespace at the end of the AWS key, which was ignored on VMs, but when I migrated to a container, the whitespace was taken literally. This was really a stupid mistake on my side. Hope it helps someone.
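If you load credentials from environment variables or files, it's cheap to defend against this class of mistake by trimming the value on load. A tiny sketch (the key below is a made-up example):

```javascript
// A stray trailing space or newline in a copied secret key changes
// every HMAC computed from it; trimming on load avoids the problem.
function loadKey(raw) {
  return (raw || '').trim();
}

console.log(loadKey('wJalrXUtnFEMI/K7MDENG \n')); // "wJalrXUtnFEMI/K7MDENG"
```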
Thanks @Deepak275, I think I was getting this error because the key was not being set properly.
In my case, it was because I did not have 'bucket' set, i.e.
const s3 = new aws.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_KEY,
  bucket: 'BUCKET_NAME'
})
I have the same problem when I deploy to my Kubernetes cluster, but everything works fine on an Ubuntu VM or locally (macOS). No whitespace at the end of the AWS key.
If you are using a correct and working AWS access key but the wrong AWS secret key, you will get this error; double-check your keys.
I ran into the same issue. I was using an IAM user with an old access key that didn't have S3FullAccess in its group permissions. Once I added the correct permissions and created a new access key, it worked! Hopefully that fixes it!
I got this error with a secret key that contained a slash /. As per the following thread https://forums.aws.amazon.com/thread.jspa?threadID=234825 I regenerated my keys and it worked again.
Pretty annoying bug to track down; thank goodness for this GitHub thread.
To clarify: is there a bug in the library where presigned URLs do not work when the secret key contains a /?
Lots of scenarios producing the same error I see... In my case having a / made no difference. I had to add a profile to .aws/credentials and add --profile to the command line.
I cannot for the life of me figure this one out.
I've tried everything I've seen across the internet: add 'Content-Length' to PUT headers, remove 'Content-Type' from PUT headers, change signatureVersion to v2, add 'Content-Type': "application/octet-stream", regenerate new root user keys, follow 4 different tutorials. 13 hours. Nothing. Nothing. What is going on? Any help appreciated.
My network request details:
Request URL: https://myApp.s3.amazonaws.com/20190420-c6d2o-praca-stanislawa-szukalskiego-z-napisem-jpg?Content-Type=image%2Fjpeg&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI4QEZ34RM4QNQDUQ%2F20190420%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20190420T190739Z&X-Amz-Expires=60&X-Amz-Signature=f917b36a6f85681a537db40cb170783f2f128de6c3a9aa6a1aa57e6efb8dd233&X-Amz-SignedHeaders=host%3Bx-amz-acl&x-amz-acl=public-read
Request Method: PUT
Status Code: 403 Forbidden
Remote Address: 52.216.96.67:443
Referrer Policy: no-referrer-when-downgrade
Response headers:
Access-Control-Allow-Methods: POST, GET, PUT, DELETE, HEAD
Access-Control-Allow-Origin: *
Connection: close
Content-Type: application/xml
Date: Sat, 20 Apr 2019 19:07:39 GMT
Server: AmazonS3
Transfer-Encoding: chunked
Vary: Origin, Access-Control-Request-Headers, Access-Control-Request-Method
x-amz-id-2: KyLLmDSf3OJr3MSkkhls+JJ9Z6IEao8H2OHD7nwKwxEE6IsSNG+I6zc3+zJ369pX0PU+DinWqcc=
x-amz-request-id: 92EF40B39F125B77
Request headers (provisional):
Content-Type: image/jpeg
Origin: http://localhost:8080
Referer: http://localhost:8080/@/oznekenzo/list/How%20To%20SQL/2/edit
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36
Content-Type: image/jpeg
X-Amz-Algorithm: AWS4-HMAC-SHA256
X-Amz-Credential: AKIAI4QEZ34RM4QNQDUQ/20190420/us-east-1/s3/aws4_request
X-Amz-Date: 20190420T190739Z
X-Amz-Expires: 60
X-Amz-Signature: f917b36a6f85681a537db40cb170783f2f128de6c3a9aa6a1aa57e6efb8dd233
X-Amz-SignedHeaders: host;x-amz-acl
x-amz-acl: public-read
I ran into this, in a yml-based CI environment. It was due to a "/" in the secret key value.
I had the same exception. Creating a new key didn't resolve it. My error was accidentally using "mybucket/myfolder" as the bucket, instead of using myfolder as part of the key.
s3.putObject(PutObjectRequest.builder().bucket(s3bucket).key(key).build(),
RequestBody.empty());
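The general rule behind this fix: the bucket parameter is just the bucket name, and any folder structure belongs in the key. A small JS helper illustrating the split (names are made up):

```javascript
// Wrong:  bucket = "mybucket/myfolder", key = "file.jpg"
// Right:  bucket = "mybucket",          key = "myfolder/file.jpg"
function splitS3Path(path) {
  var i = path.indexOf('/');
  return { bucket: path.slice(0, i), key: path.slice(i + 1) };
}

console.log(splitS3Path('mybucket/myfolder/file.jpg'));
// { bucket: 'mybucket', key: 'myfolder/file.jpg' }
```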
I had the same issue.
I was using POST with the presigned URL, but S3 expects PUT for it.
Making a PUT request resolved my issue.
I'm getting this error with the go sdk. Not sure why, but thinking I may just work around it by managing the file transfers to S3 myself.
I ran into this, in a yml-based CI environment. It was due to a "/" in the secret key value.
I seem to have a "/" in my secret as well. Wonder if that's my problem?
I have been stuck on this error for a few days now. I've tried lots of things (content-length, content-type, etc.) and nothing works. I am working with AWS API Gateway, and GET requests work fine; however, whenever I send POST or PUT requests, that 403 InvalidSignatureException arises. Could anyone figure out what the problem is?
I had accidentally generated an upload url and was trying to use it as a download url. After I changed that, it worked for me.
i changed the header of my request the "content-type": "application/x-www-form-urlencoded" then it works
With the AWS SDK v2 for PHP :D
Worked for me, thanks.
May 28, 2019 (edited)
I had this problem. For me the fix was to add the signatureVersion param to the AWS.S3 constructor, like so:
const s3 = new AWS.S3({
accessKeyId: config.awsKey,
secretAccessKey: config.awsSecret,
region: config.awsRegion,
signatureVersion: 'v4',
});
I have a similar situation to @BabyYawLwi's: I use the JS API method and the same credentials to generate GET and PUT links. GET works, but PUT doesn't.
So it's not a problem of a typo or slash in credentials, should not be a problem of headers... Tried most of the solutions in this thread and nothing worked so far.
Changing access keys worked for me.
I was inserting metadata with symbols like "'" and it was returning this error... check your strings!
I guess the real problem is a "/" in the AWS_SECRET_ACCESS_KEY; Heroku environment variables are bad at parsing special characters like "/". I generated a new key for the user that does not contain "/" and it worked for me. It worked on my third try, when I was lucky to get a secret without "/". I cannot see any other reason in my case.
I got this error with a secret key that contained a slash /. As per the following thread https://forums.aws.amazon.com/thread.jspa?threadID=234825 I regenerated my keys and it worked again.
this solved my problem
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs and link to relevant comments in this thread.
The lock bot locked this as resolved and limited conversation to collaborators on Sep 28, 2019.