Error message:
Caused by: org.apache.hadoop.fs.s3a.AWSClientIOException: doesBucketExist onxxxxx: com.amazonaws.SdkClientException: Unable to execute HTTP request: Unsupported or unrecognized SSL message: Unable to execute HTTP request: Unsupported or unrecognized SSL message
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:177)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236)
at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:372)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:308)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:227)
at
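The "Unsupported or unrecognized SSL message" in this trace is what a TLS client reports when the server answers in plain HTTP instead of TLS. A minimal stand-alone sketch (Python, nothing SeaTunnel-specific; localhost server and port are made up for the demo) that reproduces the same failure mode:

```python
import socket
import ssl
import threading

# Stand-in for an endpoint that only speaks plain HTTP (e.g. a MinIO
# port reached over http://): accept one connection and immediately
# reply with an HTTP response, never performing a TLS handshake.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()
    conn.sendall(b"HTTP/1.1 400 Bad Request\r\n\r\n")
    conn.close()

threading.Thread(target=serve_once, daemon=True).start()

# The client insists on TLS, as the S3A connector does by default when
# the endpoint has no explicit http:// scheme.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

error = None
with socket.create_connection(("127.0.0.1", port)) as raw:
    try:
        ctx.wrap_socket(raw)  # TLS handshake against a plain-HTTP reply
    except ssl.SSLError as e:
        error = e
srv.close()

print(type(error).__name__)  # -> SSLError
```

The plain-text HTTP bytes are not a valid TLS record, so the handshake is rejected immediately; the Java SSL stack wraps the same condition as "Unsupported or unrecognized SSL message".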
Notes:
This used SeaTunnel 2.3.7, launched with the SeaTunnel Zeta engine, pipeline S3File ---> PG.
Resolution:
Almost everything I found online blamed encryption or certificate problems, but the mistake I actually made was embarrassingly basic, so I'm recording it here: the script itself was wrong!! (a rookie's lament)
source {
  S3File {
    path = "/minio/xxx.txt"
    bucket = "s3a://xxxxxxxx"
    fs.s3a.endpoint = "s3.cn-north-1.amazonaws.com.cn"
    fs.s3a.aws.credentials.provider = "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider"
    fs.s3a.endpoint: "xxxxxx:xxxx"   # the offending line: no http:// scheme (also a duplicate key)
    secret_key = xxxxxxx
    access_key = xxxxxx
    file_format_type = "json"
    schema {
      fields {
        id = string
        name = string
      }
    }
  }
}
The root cause was fs.s3a.endpoint: the value must include the scheme, i.e. http://xxxx:xxx. Without it, the S3A client defaults to HTTPS, attempts a TLS handshake against the plain-HTTP MinIO port, and fails with "Unsupported or unrecognized SSL message". Yes, very basic, but worth recording.
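For reference, a sketch of the corrected source block (endpoint host, port, bucket, and credentials are placeholders mirroring the masked values above; the only substantive changes are the explicit http:// scheme and dropping the duplicate endpoint key):

```hocon
source {
  S3File {
    path = "/minio/xxx.txt"
    bucket = "s3a://xxxxxxxx"
    # scheme included: a plain-HTTP (e.g. MinIO) endpoint
    fs.s3a.endpoint = "http://xxxxxx:xxxx"
    fs.s3a.aws.credentials.provider = "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider"
    secret_key = "xxxxxxx"
    access_key = "xxxxxx"
    file_format_type = "json"
    schema {
      fields {
        id = string
        name = string
      }
    }
  }
}
```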