Spark java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32

This post resolves an error hit when a Spark program calls saveAsTextFile; the error stems from a mismatch between the Spark build and the Hadoop version. Two fixes are offered: download a Spark binary prebuilt against the matching Hadoop version, or compile Spark from source yourself.

Environment: spark-1.3-bin-hadoop2.6 (Spark 1.3 prebuilt for Hadoop 2.6), Hadoop 2.5
When my Spark program wrote out its results (saveAsTextFile), it failed with this error:

```
java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(IILjava/nio/ByteBuffer;ILjava/nio/ByteBuffer;IILjava/lang/String;J)V
at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native Method)
at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:291)
at org.apache.hadoop.hdfs.BlockReaderLocal.doByteBufferRead(BlockReaderLocal.java:338)
at org.apache.hadoop.hdfs.BlockReaderLocal.fillSlowReadBuffer(BlockReaderLocal.java:388)
at org.apache.hadoop.hdfs.BlockReaderLocal.read(BlockReaderLocal.java:408)
at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:642)
at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:698)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:752)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:793)
at java.io.DataInputStream.read(DataInputStream.java:149)
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:495)
at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:582)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:460)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:151)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:790)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:603)
at java.lang.Thread.run(Thread.java:744)
```
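The write itself was nothing exotic. A minimal sketch of the kind of job that trips this (the paths, app name, and transformation are hypothetical placeholders, not the original program): any job whose output goes through the HDFS client ends up in Hadoop's native CRC32 code once the native library is loaded.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch; all names and paths are placeholders.
object SaveExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SaveExample"))
    sc.textFile("hdfs:///user/demo/input")        // read from HDFS
      .map(_.toUpperCase)                          // some transformation
      .saveAsTextFile("hdfs:///user/demo/output") // UnsatisfiedLinkError surfaces here
    sc.stop()
  }
}
```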

Most of what I found online blamed the usual "accessing Hadoop remotely from Windows" problems, which was not my situation. What finally explained it was the official JIRA issue HADOOP-11064.

From its description the cause is clear: the signatures of Hadoop's JNI methods (here NativeCrc32.nativeVerifyChunkedSums) changed between releases, so the Hadoop client JARs bundled with one Spark build cannot bind to the libhadoop native library of a different Hadoop version, and the JVM throws UnsatisfiedLinkError at runtime. This typically bites people who download the prebuilt binary packages from the Spark website.
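Before re-downloading or rebuilding anything, it is worth confirming the mismatch. One quick check from spark-shell compares the Hadoop client version on Spark's classpath with the version of the Hadoop install that supplies libhadoop:

```scala
// Run in spark-shell. SPARK_VERSION is the running Spark release;
// VersionInfo.getVersion() reports the Hadoop client version compiled
// into the JARs on Spark's classpath. If that differs from the Hadoop
// install whose native libhadoop gets loaded, this error can occur.
println(org.apache.spark.SPARK_VERSION)
println(org.apache.hadoop.util.VersionInfo.getVersion())
```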

So there are two ways out:
1. Re-download and configure the Spark binary prebuilt against the Hadoop version you actually run.
2. Download the Spark source from the official site and build it against your installed Hadoop version; see the build note below. (After all, setting up Spark is considerably less work than setting up Hadoop.)
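A note on option 2: in the Spark 1.x line the Hadoop version is selected through Maven profiles, so a build against Hadoop 2.5 would look roughly like `mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.5.0 -DskipTests clean package` (the 1.3 docs list the hadoop-2.4 profile as covering Hadoop 2.4 and later). Treat the exact profile and version flags as assumptions and check the "Building Spark" page for your release, since the profiles have changed across versions.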

Reader comment (08-01), reporting a similar UnsatisfiedLinkError, this time from PySpark on Windows:

```
E:\anaconda3\anaconda\python.exe G:/dashuju/zshixun/daima/ider_daima/lyh_git/offline-pyspark/gmall_product/new1.py
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
25/07/31 10:36:21 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Traceback (most recent call last):
  File "G:\dashuju\zshixun\daima\ider_daima\lyh_git\offline-pyspark\gmall_product\new1.py", line 39, in <module>
    .parquet(mock_data_path) \
     ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\86158\AppData\Roaming\Python\Python312\site-packages\pyspark\sql\readwriter.py", line 544, in parquet
    return self._df(self._jreader.parquet(_to_seq(self._spark._sc, paths)))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\86158\AppData\Roaming\Python\Python312\site-packages\py4j\java_gateway.py", line 1322, in __call__
    return_value = get_return_value(
                   ^^^^^^^^^^^^^^^^^
  File "C:\Users\86158\AppData\Roaming\Python\Python312\site-packages\pyspark\errors\exceptions\captured.py", line 179, in deco
    return f(*a, **kw)
           ^^^^^^^^^^^
  File "C:\Users\86158\AppData\Roaming\Python\Python312\site-packages\py4j\protocol.py", line 326, in get_return_value
    raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o37.parquet.
: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:793)
at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:1249)
at org.apache.hadoop.fs.FileUtil.list(FileUtil.java:1454)
at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:601)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1972)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:2014)
at org.apache.hadoop.fs.ChecksumFileSystem.listStatus(ChecksumFileSystem.java:761)
at org.apache.spark.util.HadoopFSUtils$.listLeafFiles(HadoopFSUtils.scala:180)
at org.apache.spark.util.HadoopFSUtils$.$anonfun$parallelListLeafFilesInternal$1(HadoopFSUtils.scala:95)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at scala.collection.TraversableLike.map(TraversableLike.scala:286)
at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at org.apache.spark.util.HadoopFSUtils$.parallelListLeafFilesInternal(HadoopFSUtils.scala:85)
at org.apache.spark.util.HadoopFSUtils$.parallelListLeafFiles(HadoopFSUtils.scala:69)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$.bulkListLeafFiles(InMemoryFileIndex.scala:162)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.listLeafFiles(InMemoryFileIndex.scala:133)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.refresh0(InMemoryFileIndex.scala:96)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.<init>(InMemoryFileIndex.scala:68)
at org.apache.spark.sql.execution.datasources.DataSource.createInMemoryFileIndex(DataSource.scala:539)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:405)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:229)
at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:211)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:563)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.lang.Thread.run(Thread.java:748)
```

How do I fix this?