While running a Spark job, the following error occurred:
at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:933)
at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply$mcV$sp(Dataset.scala:2736)
at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply(Dataset.scala:2736)
at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply(Dataset.scala:2736)
at org.apache.spark.sql.Dataset$$anonfun$withNewRDDExecutionId$1.apply(Dataset.scala:3350)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
…
Caused by: java.io.UTFDataFormatException: encoded string too long: 105049 bytes
at java.io.DataOutputStream.writeUTF(DataOutputStream.java:364)
Troubleshooting: since the exception surfaced through the Typesafe Config serialization path, the first suspicion was that the configuration file itself was too large, but inspecting it showed it was small. The real culprit was a Config object, introduced to read a file, that was captured in the task closure and Java-serialized along with it. DataOutputStream.writeUTF can only encode strings up to 65535 bytes, and the serialized config string came to 105049 bytes, hence the exception. Adding the @transient modifier to that field excludes it from serialization, which makes the "encoded string too long" error go away.
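A minimal sketch of the fix, assuming the Config object lives as a field on a serializable job class whose methods run inside foreachPartition; the ExportJob class, its spark field, and the app.endpoint key are hypothetical, not from the original job:

import com.typesafe.config.{Config, ConfigFactory}
import org.apache.spark.sql.{Row, SparkSession}

// Hypothetical job class for illustration; the @transient lazy val config
// line is the actual fix described above.
class ExportJob(@transient val spark: SparkSession) extends Serializable {

  // Without @transient, this Config is serialized together with the closure
  // that references it; the serialization path visible in the stack trace
  // ends in DataOutputStream.writeUTF, which rejects strings over 65535
  // bytes. @transient skips the field, and lazy val rebuilds it on each
  // executor the first time it is accessed (the config file must therefore
  // be on the executor classpath as well).
  @transient lazy val config: Config = ConfigFactory.load()

  def run(): Unit = {
    val df = spark.range(0, 100).toDF("id")
    df.foreachPartition { rows: Iterator[Row] =>
      // Referencing `config` here captures `this`, but the transient field
      // itself is not shipped; it is re-initialized executor-side.
      val endpoint = config.getString("app.endpoint") // hypothetical key
      rows.foreach(row => println(s"$endpoint <- ${row.getLong(0)}"))
    }
  }
}

The trade-off is that each executor re-reads the configuration on first use instead of receiving the driver's copy, so the same config files must be resolvable from the executor classpath.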