I have a PySpark DataFrame like this:
data = [("1", "a"), ("2", "a"), ("3", "b"), ("4", "a")]
df = spark.createDataFrame(data).toDF(*("id", "name"))
df.show()
+---+----+
| id|name|
+---+----+
| 1| a|
| 2| a|
| 3| b|
| 4| a|
+---+----+
Group the DataFrame by its name column:
df.groupBy("name").count().show()
+----+-----+
|name|count|
+----+-----+
| a| 3|
| b| 1|
+----+-----+
from pyspark.sql import functions as F
data = [("1", "a"), ("2", "a"), ("3", "b"), ("4", "a")]
df = spark.createDataFrame(data).toDF(*("id", "name"))
df.groupBy("name").count().where(F.col('count') < 3).show()
F is just an alias for the pyspark.sql.functions module; you can bind it to any identifier you like, but it is conventionally written as F or func. This is purely a matter of habit.
Result:
+----+-----+
|name|count|
+----+-----+
| b| 1|
+----+-----+
# groupBy also accepts multiple column names at once
duplicates_df = spark.sql(hive_read).groupBy("t", "g").count()
# keep only the rows that occur more than 50 times
duplicates_df = duplicates_df.filter(duplicates_df["count"] > 50)
# sort by the count column in descending order
sorted_duplicates_df = duplicates_df.orderBy(duplicates_df["count"].desc())
# extract the distinct (t, g) pairs that met the duplicate condition above
duplicate_tids = duplicates_df.select("t", "g").distinct()