Flink MiniCluster error caused by the JDK version

2022-10-24 15:06:58.411 [ORC_GET_SPLITS #1] WARN  org.apache.hadoop.hdfs.client.impl.BlockReaderFactory  - I/O error constructing remote block reader.
java.io.IOException: java.security.InvalidKeyException: Illegal key size
	at org.apache.hadoop.crypto.JceAesCtrCryptoCodec$JceAesCtrCipher.init(JceAesCtrCryptoCodec.java:116)
	at org.apache.hadoop.crypto.CryptoInputStream.updateDecryptor(CryptoInputStream.java:295)
	at org.apache.hadoop.crypto.CryptoInputStream.resetStreamOffset(CryptoInputStream.java:308)
	at org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:133)
	at org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:114)
	at org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:138)
	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.createStreamPair(DataTransferSaslUtil.java:393)
	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:606)
	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:342)
	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:276)
	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:227)
	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.peerSend(SaslDataTransferClient.java:170)
	at org.apache.hadoop.hdfs.DFSUtilClient.peerFromSocketAndKey(DFSUtilClient.java:731)
	at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2942)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:822)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:747)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:380)
	at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:732)
	at org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1137)
	at org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:1089)
	at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1445)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1409)
	at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:121)
	at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:111)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:573)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:369)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:61)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:111)
	at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:1696)
	at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.callInternal(OrcInputFormat.java:1584)
	at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.access$3000(OrcInputFormat.java:1371)
	at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1556)
	at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:1553)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
	at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1553)
	at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:1371)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.security.InvalidKeyException: Illegal key size
	at javax.crypto.Cipher.checkCryptoPerm(Cipher.java:1039)
	at javax.crypto.Cipher.implInit(Cipher.java:805)
	at javax.crypto.Cipher.chooseProvider(Cipher.java:864)
	at javax.crypto.Cipher.init(Cipher.java:1396)
	at javax.crypto.Cipher.init(Cipher.java:1327)
	at org.apache.hadoop.crypto.JceAesCtrCryptoCodec$JceAesCtrCipher.init(JceAesCtrCryptoCodec.java:113)
	... 42 common frames omitted

Flink throws the error above when running a job locally (in the MiniCluster) because of the JDK version.
JDK 1.8.0_144 exhibits the problem, but 1.8.0_201 does not.
The root cause is the JCE cryptographic policy: JDK 8 builds before 8u161 ship with the "limited" policy, which caps AES keys at 128 bits, so Hadoop's encrypted data transfer (dfs.encrypt.data.transfer) fails with InvalidKeyException: Illegal key size as soon as it negotiates a longer key. From 8u161 onward the "unlimited" policy is the default, which is why 1.8.0_201 works.
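
To confirm which policy a given JVM is running with, javax.crypto.Cipher can report the maximum allowed key length. A minimal diagnostic sketch (plain JDK, not part of Flink or Hadoop):

import javax.crypto.Cipher;

public class JcePolicyCheck {
    public static void main(String[] args) throws Exception {
        // Integer.MAX_VALUE means the unlimited policy is active;
        // 128 means the legacy limited policy caps AES key sizes.
        int maxAes = Cipher.getMaxAllowedKeyLength("AES");
        System.out.println("Max allowed AES key length: " + maxAes);
        if (maxAes < 256) {
            System.out.println("Limited JCE policy - HDFS encrypted transfer "
                    + "with AES-256 will fail with 'Illegal key size'.");
        }
    }
}

If upgrading the JDK is not an option, builds from 8u151 onward can enable the unlimited policy through the crypto.policy security property, either by setting crypto.policy=unlimited in $JAVA_HOME/jre/lib/security/java.security or programmatically; builds older than 8u151 (such as 1.8.0_144) instead need the separately downloaded JCE Unlimited Strength policy jars. A sketch of the programmatic switch, assuming it runs before any JCE class is first used:

import java.security.Security;
import javax.crypto.Cipher;

public class EnableUnlimitedCrypto {
    public static void main(String[] args) throws Exception {
        // Works on JDK 8u151+ only; must run before the JCE framework
        // initializes, otherwise the limited policy is already locked in.
        Security.setProperty("crypto.policy", "unlimited");
        System.out.println("Max AES key length now: "
                + Cipher.getMaxAllowedKeyLength("AES"));
    }
}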
