Today I ran into a problem while running my code that gave me a real headache.
- Knowing my own limits, I looked at how more experienced developers had solved it.
```
Caused by: java.lang.RuntimeException: Error while encoding: java.lang.RuntimeException: java.lang.String is not a valid external type for schema of bigint
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 0, date), LongType) AS date#21L
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 1, user_id), LongType) AS user_id#22L
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 2, session_id), StringType), true, false) AS session_id#23
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 3, page_id), LongType) AS page_id#24L
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 4, action_time), StringType), true, false) AS action_time#25
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 5, search_keyword), StringType), true, false) AS search_keyword#26
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 6, click_category_id), LongType) AS click_category_id#27L
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 7, click_product_id), LongType) AS click_product_id#28L
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 8, order_category_ids), StringType), true, false) AS order_category_ids#29
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 9, order_product_ids), StringType), true, false) AS order_product_ids#30
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 10, pay_category_ids), StringType), true, false) AS pay_category_ids#31
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 11, pay_product_ids), StringType), true, false) AS pay_product_ids#32
	at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:291)
	at org.apache.spark.sql.SparkSession$$anonfun$4.apply(SparkSession.scala:589)
	at org.apache.spark.sql.SparkSession$$anonfun$4.apply(SparkSession.scala:589)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.scheduler.Task.run(Task.scala:109)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: java.lang.String is not a valid external type for schema of bigint
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.If$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.writeFields_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
	at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:288)
	... 16 more
```
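For context, the encoder error at the top of the trace typically appears when a Row's runtime value does not match the declared schema type, e.g. a String supplied for a field whose schema says bigint (LongType). A minimal pure-Java analogy of Spark's validateexternaltype check (the class and method names here are illustrative, not Spark's actual API):

```java
import java.util.List;

public class ExternalTypeCheck {
    // Simplified stand-in for Spark's validateexternaltype: the schema says
    // the field must be a Long (bigint), but the row may carry a String.
    static Long validateBigint(Object fieldValue, String fieldName) {
        if (fieldValue == null) return null;
        if (!(fieldValue instanceof Long)) {
            throw new RuntimeException(fieldValue.getClass().getName()
                + " is not a valid external type for schema of bigint (field: " + fieldName + ")");
        }
        return (Long) fieldValue;
    }

    public static void main(String[] args) {
        // "date" arrives as a String such as "2019-07-17" instead of a Long
        List<Object> row = List.of("2019-07-17", 123L);
        try {
            validateBigint(row.get(0), "date");
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The usual fix on the Spark side is to parse or cast the string column to the numeric type before applying the schema.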
- Analyzing the exception stack shows that the exception was thrown in the org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileWithMode0 method, which is declared as follows:
```java
/** Wrapper around CreateFile() with security descriptor on Windows */
private static native FileDescriptor createFileWithMode0(String path,
    long desiredAccess, long shareMode, long creationDisposition, int mode)
    throws NativeIOException;
```
- As the code shows, this is a native method with no usable implementation in this Hadoop setup. So why was it called at all? Tracing further up the exception stack, the call originates in the constructor of org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream, which invokes the nativeio.NativeIO$Windows class. The corresponding method is:
```java
private LocalFSFileOutputStream(Path f, boolean append,
    FsPermission permission) throws IOException {
  File file = pathToFile(f);
  if (permission == null) {
    this.fos = new FileOutputStream(file, append);
  } else {
    if (Shell.WINDOWS && NativeIO.isAvailable()) {
      this.fos = NativeIO.Windows.createFileOutputStreamWithMode(file,
          append, permission.toShort());
    } else {
      this.fos = new FileOutputStream(file, append);
      boolean success = false;
      try {
        setPermission(f, permission);
        success = true;
      } finally {
        if (!success) {
          IOUtils.cleanup(LOG, this.fos);
        }
      }
    }
  }
}
```
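The branch logic of this constructor can be condensed into a small sketch (a simplification for illustration; `isWindows` and `nativeAvailable` stand in for `Shell.WINDOWS` and `NativeIO.isAvailable()`):

```java
public class OutputStreamChoice {
    // Mirrors the decision in LocalFSFileOutputStream's constructor:
    // the native Windows path is taken only when BOTH conditions hold.
    static String choosePath(boolean hasPermission, boolean isWindows, boolean nativeAvailable) {
        if (!hasPermission) return "plain FileOutputStream";
        if (isWindows && nativeAvailable) return "NativeIO.Windows.createFileOutputStreamWithMode";
        return "plain FileOutputStream + setPermission";
    }

    public static void main(String[] args) {
        // On the author's machine: Windows + native library loaded => native branch
        System.out.println(choosePath(true, true, true));
        // On colleagues' machines: loadLibrary failed => pure-Java fallback branch
        System.out.println(choosePath(true, true, false));
    }
}
```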
- The call stack shows that it is the NativeIO.Windows.createFileOutputStreamWithMode call above that reaches the NativeIO.Windows class, so the `Shell.WINDOWS && NativeIO.isAvailable()` condition must have evaluated to true. The NativeIO.isAvailable method looks like this:
```java
/**
 * Return true if the JNI-based native IO extensions are available.
 */
public static boolean isAvailable() {
  return NativeCodeLoader.isNativeCodeLoaded() && nativeLoaded;
}
```
- isAvailable essentially delegates to NativeCodeLoader.isNativeCodeLoaded:
```java
static {
  // Try to load native hadoop library and set fallback flag appropriately
  if (LOG.isDebugEnabled()) {
    LOG.debug("Trying to load the custom-built native-hadoop library...");
  }
  try {
    System.loadLibrary("hadoop");
    LOG.debug("Loaded the native-hadoop library");
    nativeCodeLoaded = true;
  } catch (Throwable t) {
    // Ignore failure to load
    if (LOG.isDebugEnabled()) {
      LOG.debug("Failed to load native-hadoop with error: " + t);
      LOG.debug("java.library.path=" +
          System.getProperty("java.library.path"));
    }
  }

  if (!nativeCodeLoaded) {
    LOG.warn("Unable to load native-hadoop library for your platform... " +
        "using builtin-java classes where applicable");
  }
}

/**
 * Check if native-hadoop code is loaded for this platform.
 *
 * @return <code>true</code> if native-hadoop is loaded,
 * else <code>false</code>
 */
public static boolean isNativeCodeLoaded() {
  return nativeCodeLoaded;
}
```
- As you can see, isNativeCodeLoaded simply returns a field, so where does the problem actually arise?
Looking at the static initializer of the NativeCodeLoader class, there is a `System.loadLibrary("hadoop")` call. Could that be the cause? Debugging on a colleague's machine, System.loadLibrary("hadoop") throws, so the catch block runs and the flag stays false; on my own machine, however, the call succeeds and execution simply continues. So what does System.loadLibrary do? Reading the source shows that it searches the directories on java.library.path, which is built from the system and user environment variables. The root cause on my machine was that either a hadoop.dll file sat in C:\Windows\System32, or the %Hadoop_Home%/bin directory was configured on the Path environment variable.
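The per-machine difference can be checked directly: on a machine without hadoop.dll anywhere on java.library.path the load attempt throws UnsatisfiedLinkError, while on a machine with hadoop.dll on the Path it succeeds. A quick probe (it prints rather than asserts, since the outcome depends on the machine):

```java
public class NativeLoadProbe {
    public static void main(String[] args) {
        // java.library.path on Windows is derived from the PATH environment
        // variable plus system directories such as C:\Windows\System32
        System.out.println("java.library.path = " + System.getProperty("java.library.path"));
        try {
            System.loadLibrary("hadoop");
            System.out.println("hadoop native library loaded (hadoop.dll was found)");
        } catch (Throwable t) {
            // Same fallback the NativeCodeLoader static block takes
            System.out.println("failed to load native hadoop: " + t);
        }
    }
}
```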
In short: some directory on the system Path environment variable contained a hadoop.dll file, so the code concluded it was running in a Hadoop cluster environment; but the Hadoop cluster code does not support the Windows environment, which produced the exception. The fix is equally simple: check every directory listed in the Path environment variable and make sure none of them contains a hadoop.dll file.
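That check can be automated. The sketch below (a hypothetical helper, not part of Hadoop) walks every directory on PATH and reports any that contain hadoop.dll:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class HadoopDllScanner {
    // Return every directory from a PATH-style string that contains hadoop.dll
    static List<String> findHadoopDll(String pathValue) {
        List<String> hits = new ArrayList<>();
        if (pathValue == null) return hits;
        // File.pathSeparator is ";" on Windows, ":" elsewhere
        for (String dir : pathValue.split(File.pathSeparator)) {
            if (dir.isEmpty()) continue;
            if (new File(dir, "hadoop.dll").isFile()) {
                hits.add(dir);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<String> hits = findHadoopDll(System.getenv("PATH"));
        if (hits.isEmpty()) {
            System.out.println("no hadoop.dll on PATH");
        } else {
            hits.forEach(d -> System.out.println("hadoop.dll found in: " + d));
        }
    }
}
```

Remember to also check C:\Windows\System32, which is searched even though it may not appear in PATH.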
Note that if you remove a directory from the Path environment variable, you need to restart IntelliJ IDEA before the change takes effect, because the usr_paths and sys_paths fields in the ClassLoader class are initialized once and then cached.