Background
When doing computation with Spark, we often need udf to register custom functions.
As long as the custom function takes at most 10 parameters, the udf compiles and runs normally, because Spark's Scala udf helper only provides overloads for functions of up to 10 arguments.
For example:
import org.apache.spark.sql.functions.{udf, lit}

val makeParams: (String, String, String, String, String, String, String, String, String, String) => TestProperty =
  (orderId: String, barcode: String, deliveryId: String, mailNo: String, expressCode: String,
   platformCode: String, areaName: String, productId: String, skuId: String, shopName: String) => {
    new TestProperty(orderId, barcode, deliveryId, mailNo, expressCode, platformCode, areaName, productId, skuId, shopName)
  }
val make_params = udf(makeParams)
val result = waitWriteDataFrame
  .withColumn("properties", make_params('orderId, 'barcode, 'deliveryId, 'mailNo, 'expressCode, 'platformCode, 'areaName, 'productId, 'skuCode, 'shopName))
  .withColumn("type", lit("track"))
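To make the pattern above concrete, here is a minimal, self-contained sketch. The TestProperty case class, the column names, and the sample rows are assumptions for illustration, since the original schema is not shown in the post:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{udf, lit}

// Assumed shape of TestProperty; the real class is not shown in the post.
case class TestProperty(orderId: String, barcode: String, deliveryId: String,
                        mailNo: String, expressCode: String, platformCode: String,
                        areaName: String, productId: String, skuId: String,
                        shopName: String)

object UdfLimitDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("udf-demo").getOrCreate()
    import spark.implicits._

    // A ten-argument function: the largest arity the Scala `udf` helper accepts.
    val makeParams = udf(TestProperty.apply _)

    // Hypothetical sample data standing in for waitWriteDataFrame.
    val df = Seq(
      ("o1", "b1", "d1", "m1", "e1", "p1", "a1", "pr1", "s1", "shop1")
    ).toDF("orderId", "barcode", "deliveryId", "mailNo", "expressCode",
           "platformCode", "areaName", "productId", "skuId", "shopName")

    val result = df
      .withColumn("properties", makeParams('orderId, 'barcode, 'deliveryId, 'mailNo,
        'expressCode, 'platformCode, 'areaName, 'productId, 'skuId, 'shopName))
      .withColumn("type", lit("track"))

    result.show(truncate = false)
    spark.stop()
  }
}
```

An eleventh parameter on makeParams would make the udf(...) call fail to compile, since no matching overload exists beyond Function10.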