Hive UDF KryoException: Unable to find class

After getting Hive-on-Spark configured and working, I wanted to verify that Hive UDFs still run correctly under the Spark execution engine:

set hive.execution.engine=spark;
add jar viewfs:///dirs/brickhouse-0.7.1-SNAPSHOT-jar-with-dependencies.jar;
create temporary function to_json AS 'brickhouse.udf.json.ToJsonUDF';
select to_json(app_metric) as tt from tbl_name where dt = '20180417' limit 10;

But when the query runs in yarn-cluster mode, it fails with the following error:

org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find class: brickhouse.udf.json.ToJsonUDF
Serialization trace:
genericUDF (org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc)
colExprMap (org.apache.hadoop.hive.ql.exec.SelectOperator)
childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
invertedWorkGraph (org.apache.hadoop.hive.ql.plan.SparkWork)
    at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:156)
    at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:133)
    at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670)
    at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClass(SerializationUtilities.java:181)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:118)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551)
    at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790)
    at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClassAndObject(SerializationUtilities.java:176)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:161)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:39)
    at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708)
    at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:214)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551)
    at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790)
    at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClassAndObject(SerializationUtilities.java:176)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:134)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:40)
    at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708)
    at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:214)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551)
    at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790)
    at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClassAndObject(SerializationUtilities.java:176)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:161)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:39)
    at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708)
    at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:214)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551)
    at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790)
    at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClassAndObject(SerializationUtilities.java:176)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:153)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:39)
    at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708)
    at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:214)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551)
    at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:686)
    at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:206)
    at org.apache.hadoop.hive.ql.exec.spark.KryoSerializer.deserialize(KryoSerializer.java:60)
    at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient$JobStatusJob.call(RemoteHiveSparkClient.java:329)
    at org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:358)
    at org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:323)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: brickhouse.udf.json.ToJsonUDF
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:154)
    ... 47 more

Looking at the stack trace alone, the natural guess is that the jar was never shipped to the RemoteDriver, which is why the class cannot be found.

Note: when debugging, besides reading the stack trace, another tactic is to read the INFO output above it. Some errors fail silently, so the INFO log may already contain incorrect entries that point at the real problem.

Pulling the relevant part of the INFO log:

18/04/23 18:51:39 INFO cluster.YarnClusterScheduler: YarnClusterScheduler.postStartHook done
18/04/23 18:51:39 INFO spark.SparkContext: Added JAR viewfs://hadoop-lt-cluster/tmp/hive/dp/_spark_session_dir/bc42b5c8-f183-4088-b238-2c3a75725d06/hive-exec-2.3.2.jar at viewfs://hadoop-lt-cluster/tmp/hive/dp/_spark_session_dir/bc42b5c8-f183-4088-b238-2c3a75725d06/hive-exec-2.3.2.jar with timestamp 1524480699144
18/04/23 18:51:39 INFO spark.SparkContext: Added JAR viewfs://hadoop-lt-cluster/tmp/hive/dp/_spark_session_dir/bc42b5c8-f183-4088-b238-2c3a75725d06/kuaishou-analytics-auth-1.0.0.jar at viewfs://hadoop-lt-cluster/tmp/hive/dp/_spark_session_dir/bc42b5c8-f183-4088-b238-2c3a75725d06/kuaishou-analytics-auth-1.0.0.jar with timestamp 1524480699162
18/04/23 18:51:39 INFO spark.SparkContext: Added JAR viewfs://hadoop-lt-cluster/tmp/hive/dp/_spark_session_dir/bc42b5c8-f183-4088-b238-2c3a75725d06/brickhouse-0.7.1-SNAPSHOT-jar-with-dependencies-2.jar at viewfs://hadoop-lt-cluster/tmp/hive/dp/_spark_session_dir/bc42b5c8-f183-4088-b238-2c3a75725d06/brickhouse-0.7.1-SNAPSHOT-jar-with-dependencies-2.jar with timestamp 1524480699165
18/04/23 18:51:39 INFO storage.BlockManagerMasterEndpoint: Registering block manager bjlt-h1180.sy:37446 with 7.0 GB RAM, BlockManagerId(53, bjlt-h1180.sy, 37446)
18/04/23 18:51:39 INFO client.RemoteDriver: Received job request 2d71a807-c512-4032-8c9e-71a378d3168b
18/04/23 18:51:39 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (10.48.74.35:59232) with ID 41
18/04/23 18:51:39 INFO spark.ExecutorAllocationManager: New executor 41 has registered (new total is 53)
18/04/23 18:51:39 INFO storage.BlockManagerMasterEndpoint: Registering block manager bjlt-h1864.sy:36387 with 7.0 GB RAM, BlockManagerId(41, bjlt-h1864.sy, 36387)
18/04/23 18:51:39 INFO client.SparkClientUtilities: Added jar[file:/media/disk3/yarn_data/usercache/dp/appcache/application_1523431310007_1301182/container_e95_1523431310007_1301182_01_000001/viewfs:/hadoop-lt-cluster/tmp/hive/dp/_spark_session_dir/bc42b5c8-f183-4088-b238-2c3a75725d06/hive-exec-2.3.2.jar] to classpath.
18/04/23 18:51:39 INFO client.SparkClientUtilities: Added jar[file:/media/disk3/yarn_data/usercache/dp/appcache/application_1523431310007_1301182/container_e95_1523431310007_1301182_01_000001/viewfs:/hadoop-lt-cluster/tmp/hive/dp/_spark_session_dir/bc42b5c8-f183-4088-b238-2c3a75725d06/brickhouse-0.7.1-SNAPSHOT-jar-with-dependencies-2.jar] to classpath.
18/04/23 18:51:39 INFO client.SparkClientUtilities: Added jar[file:/media/disk3/yarn_data/usercache/dp/appcache/application_1523431310007_1301182/container_e95_1523431310007_1301182_01_000001/viewfs:/hadoop-lt-cluster/tmp/hive/dp/_spark_session_dir/bc42b5c8-f183-4088-b238-2c3a75725d06/kuaishou-analytics-auth-1.0.0.jar] to classpath.
18/04/23 18:51:39 INFO client.RemoteDriver: Failed to run job 2d71a807-c512-4032-8c9e-71a378d3168b

The paths in SparkContext's "Added JAR" lines are correct, but the corresponding SparkClientUtilities "Added jar" lines contain paths like "file:/media/disk3/yarn_data/usercache/dp/appcache/application_1523431310007_1301182/container_e95_1523431310007_1301182_01_000001/viewfs:/hadoop-lt-cluster/tmp/hive/dp/_spark_session_dir/bc42b5c8-f183-4088-b238-2c3a75725d06/hive-exec-2.3.2.jar", where the viewfs URI has been appended to a local container directory. Path resolution is clearly going wrong. Reviewing the Hive source for SparkClientUtilities turns up the following snippet:

private static URL urlFromPathString(String path, Long timeStamp,
    Configuration conf, File localTmpDir) {
  URL url = null;
  try {
    if (StringUtils.indexOf(path, "file:/") == 0) {
      url = new URL(path);
    } else if (StringUtils.indexOf(path, "hdfs:/") == 0) {
      Path remoteFile = new Path(path);
      Path localFile =
          new Path(localTmpDir.getAbsolutePath() + File.separator + remoteFile.getName());
      Long currentTS = downloadedFiles.get(path);
      if (currentTS == null) {
        currentTS = -1L;
      }
      if (!new File(localFile.toString()).exists() || currentTS < timeStamp) {
        LOG.info("Copying " + remoteFile + " to " + localFile);
        FileSystem remoteFS = remoteFile.getFileSystem(conf);
        remoteFS.copyToLocalFile(remoteFile, localFile);
        downloadedFiles.put(path, timeStamp);
      }
      return urlFromPathString(localFile.toString(), timeStamp, conf, localTmpDir);
    } else {
      url = new File(path).toURL();
    }
  } catch (Exception err) {
    LOG.error("Bad URL " + path + ", ignoring path", err);
  }
  return url;
}
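
A "viewfs:/..." path matches neither the "file:/" branch nor the "hdfs:/" branch, so it falls through to the final else, where new File(path).toURL() treats the entire URI as a relative local path and resolves it against the JVM's working directory, which inside a YARN container is the container directory. A minimal sketch that reproduces the malformed URL (the class name and jar path here are made up for illustration):

import java.io.File;
import java.net.URL;

public class ViewfsFallthroughDemo {
  public static void main(String[] args) throws Exception {
    // A viewfs URI, as passed to urlFromPathString after "add jar".
    String path = "viewfs://hadoop-lt-cluster/tmp/hive/dp/brickhouse.jar";
    // new File(path) sees a relative path (no leading '/'), so it is
    // resolved against the current working directory. This has the same
    // effect as the deprecated new File(path).toURL() call in the else
    // branch above.
    URL url = new File(path).toURI().toURL();
    // Prints something like:
    // file:/<working-dir>/viewfs:/hadoop-lt-cluster/tmp/hive/dp/brickhouse.jar
    System.out.println(url);
  }
}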

So the version of Hive in use (2.3.2) only handles the hdfs and file schemes and has no support for viewfs. Extending the condition to else if (StringUtils.indexOf(path, "hdfs:/") == 0 || StringUtils.indexOf(path, "viewfs:/") == 0) { fixes the problem; a sketch of the patched method follows.
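
For reference, here is how the patched method might look. Apart from the added viewfs condition it mirrors the snippet above; the class wrapper, imports, and downloadedFiles map are filled in only to make the sketch self-contained, and the exact import paths in the real Hive source may differ.

import java.io.File;
import java.net.URL;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class PatchedSparkClientUtilities {
  // Cache of remote path -> timestamp, standing in for the field the
  // original snippet references.
  private static final Map<String, Long> downloadedFiles = new ConcurrentHashMap<>();

  static URL urlFromPathString(String path, Long timeStamp,
      Configuration conf, File localTmpDir) {
    URL url = null;
    try {
      if (StringUtils.indexOf(path, "file:/") == 0) {
        url = new URL(path);
      } else if (StringUtils.indexOf(path, "hdfs:/") == 0
          || StringUtils.indexOf(path, "viewfs:/") == 0) { // the one-line fix
        // Download the remote jar into the local tmp dir, then recurse so
        // the local copy is converted to a file: URL.
        Path remoteFile = new Path(path);
        Path localFile =
            new Path(localTmpDir.getAbsolutePath() + File.separator + remoteFile.getName());
        Long currentTS = downloadedFiles.get(path);
        if (currentTS == null) {
          currentTS = -1L;
        }
        if (!new File(localFile.toString()).exists() || currentTS < timeStamp) {
          FileSystem remoteFS = remoteFile.getFileSystem(conf);
          remoteFS.copyToLocalFile(remoteFile, localFile);
          downloadedFiles.put(path, timeStamp);
        }
        return urlFromPathString(localFile.toString(), timeStamp, conf, localTmpDir);
      } else {
        url = new File(path).toURL();
      }
    } catch (Exception err) {
      // The original logs "Bad URL ..." here and returns null.
    }
    return url;
  }
}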
