With PySpark DataFrames you often need to do a groupby and apply custom logic per group. There are two ways to implement this: the DataFrame's pandas_udf, or the RDD's mapValues (the latter requires converting the DataFrame to an RDD first).
Below is the pandas_udf implementation. Note that groupby().apply() only accepts a bare function name; you cannot pass extra arguments to it.
from pyspark.sql.functions import pandas_udf, PandasUDFType
import pandas as pd

# schema1 and ftScore are assumed to be defined elsewhere
@pandas_udf(schema1, functionType=PandasUDFType.GROUPED_MAP)
def ftscore2(df):
    print('ok')
    # df is one group's rows as a pandas DataFrame; a nested pandas
    # groupby/apply runs inside the udf
    mid = df.groupby(['device_number']).apply(lambda x: ftScore(x))
    print("mid", mid)
    result = pd.DataFrame(mid)
    return result

aa = ldf2.groupby(['device_number']).apply(ftscore2)
aa.show()
So, to pass extra parameters, you need to nest another plain Python function inside the pandas_udf:
import numpy as np

def flow_sum_app(df4):
    # sum of the total_flow column for one app's rows
    rst = np.sum(df4[["total_flow"]])[0]
    return rst
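flow_sum_app is plain pandas/numpy, so it can be sanity-checked locally before it ever touches Spark. A quick check on a made-up single-app group (the sample values are invented for illustration):

```python
import numpy as np
import pandas as pd

def flow_sum_app(df4):
    # np.sum over the one-column frame returns a one-element Series;
    # [0] picks out the scalar
    rst = np.sum(df4[["total_flow"]])[0]
    return rst

# a tiny made-up group for illustration
df4 = pd.DataFrame({"total_flow": [11, 12]})
print(flow_sum_app(df4))
```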
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

fcls = [flow_sum_app]     # aggregation functions to apply per app category
cmls = []
appls = ['金融理财', '商务办公', '旅游出行', '网上购物', '小额借贷', '财经资讯', '头部银行', '地方银行', '商业银行']
fcls1 = ["flow_sum_app"]  # function names, used to build the output column names
for i in appls:
    for f1 in fcls1:
        cmls.append(i + "_" + f1)

# output schema: the group key plus one double column per app/function pair
schema3 = StructType([
    StructField("device_number", StringType()),
])
for i in cmls:
    schema3.add(i, DoubleType(), True)
print(schema3)
def ft7(df3, appls, fcls, fcls1):
    # first column: the group key (all rows in a group share one device_number)
    rsls = [df3['device_number'].iloc[0]]
    for i in appls:
        df4 = df3[df3.prod_label_name3 == i]
        for f in fcls:
            rsls.append(f(df4))
    cmls = ["device_number"]
    for i in appls:
        for f1 in fcls1:
            cmls.append(i + "_" + f1)
    df5 = pd.DataFrame(data=[rsls], columns=cmls)
    # print("df5", df5)
    return df5
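Because ft7 is pure pandas, it can be exercised locally on one group before wiring it into the udf. A self-contained sketch with the app list shortened to two entries (sample rows invented for illustration):

```python
import numpy as np
import pandas as pd

def flow_sum_app(df4):
    return np.sum(df4[["total_flow"]])[0]

def ft7(df3, appls, fcls, fcls1):
    rsls = [df3['device_number'].iloc[0]]
    for i in appls:
        df4 = df3[df3.prod_label_name3 == i]
        for f in fcls:
            rsls.append(f(df4))
    cmls = ["device_number"]
    for i in appls:
        for f1 in fcls1:
            cmls.append(i + "_" + f1)
    return pd.DataFrame(data=[rsls], columns=cmls)

# one group's rows, as the GROUPED_MAP udf would receive them
g = pd.DataFrame({
    "device_number": ["a", "a"],
    "prod_label_name3": ["金融理财", "小额借贷"],
    "total_flow": [11, 12],
})
out = ft7(g, ["金融理财", "小额借贷"], [flow_sum_app], ["flow_sum_app"])
print(out)
```

The result is a one-row DataFrame per group: the key column plus one aggregated column per app/function pair, matching the shape declared in schema3.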
from pyspark.sql.functions import pandas_udf, PandasUDFType

@pandas_udf(schema3, functionType=PandasUDFType.GROUPED_MAP)
def ftscore6(df3):
    # the udf itself takes only the group DataFrame; appls, fcls and
    # fcls1 are captured from the enclosing scope
    return ft7(df3, appls, fcls, fcls1)

ldf2 = spark.createDataFrame(
    [("a", '金融理财', 11), ("a", '小额借贷', 12), ("b", '小额借贷', 8), ("b", '旅游出行', 10)],
    ("device_number", "prod_label_name3", "total_flow"))
aa = ldf2.groupby(['device_number']).apply(ftscore6)
aa.show()
Explanation:
ftscore6 is a pandas_udf. Each group of the PySpark DataFrame is handed to it as a pandas DataFrame that still contains the key column (device_number; after the groupby, every row in a group carries the same key). The udf itself cannot accept any extra arguments.
ft7 is a plain Python function: it receives a pandas DataFrame and can take extra arguments freely.
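The same nesting trick works for any callback API that only accepts a bare function: write an outer function that takes the parameters and returns the real callback, which captures them as a closure. A Spark-free sketch of the pattern using plain pandas groupby().apply() (all names and sample values here are illustrative):

```python
import pandas as pd

def make_scorer(apps, agg):
    # returns a one-argument function suitable for apply();
    # apps and agg are captured in the closure
    def scorer(g):
        row = {}
        for app in apps:
            row[app + "_sum"] = agg(g.loc[g.prod_label_name3 == app, "total_flow"])
        return pd.Series(row)
    return scorer

df = pd.DataFrame({
    "device_number": ["a", "a", "b", "b"],
    "prod_label_name3": ["金融理财", "小额借贷", "小额借贷", "旅游出行"],
    "total_flow": [11, 12, 8, 10],
})
# different parameters, same callback API: just build a new closure
result = df.groupby("device_number").apply(make_scorer(["小额借贷"], sum))
print(result)
```

In the Spark version above, ftscore6 plays the role of scorer: it is the bare function handed to apply(), and the parameters reach it through the enclosing scope rather than through its argument list.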