When using Spark, you need to understand its execution process and its API (pyspark - http://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html). It works quite differently from pandas/Python execution. Spark relies on lazy evaluation, so whenever you want to inspect the data you have to run an action such as show, first, collect or take. Without one of these actions, it only returns the DataFrame's schema (which is what you are seeing in your question).
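For instance, a transformation such as filter only records the plan and returns immediately, while an action such as count actually runs the job. A minimal sketch, assuming an interactive pyspark session where sqlContext is already available (the variable names are just illustrative):

# Transformation: nothing is executed yet, Spark only records the lineage
df = sqlContext.createDataFrame([('a', 1), ('b', 2)], ['key', 'value'])
filtered = df.filter(df.value > 1)   # returns a new DataFrame instantly, no job runs
# Action: this is the point where Spark actually executes the plan
print(filtered.count())              # 1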
Let me walk you through some examples:-

process_df = sqlContext.createDataFrame([
['2013-01-01','U2_P1','p@c.com','100','P_P'],
['2013-01-01','U2_P2','p@c.com','100','P_P1'],
['2014-01-01','U2_P1','p@c.com','100','P_P'],
['2014-01-01','U2_P2','p@c.com','100','P_P1'],
['2015-01-01','U2_P1','p@c.com','100','P_P'],
['2015-01-01','U2_P2','p@c.com','100','P_P1']
], ['date','p1id','p2id','amount','p3id'])
#This prints Schema instead of Data
print(process_df)
DataFrame[date: string, p1id: string, p2id: string, amount: string, p3id: string]
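As a side note (not part of the original answer), printSchema() gives a more readable tree view of the same information:

# Hypothetical addition: printSchema() renders the schema as a tree
process_df.printSchema()
# root
#  |-- date: string (nullable = true)
#  |-- p1id: string (nullable = true)
#  |-- p2id: string (nullable = true)
#  |-- amount: string (nullable = true)
#  |-- p3id: string (nullable = true)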
#This prints data instead of schema
process_df.show()
+----------+-----+-------+------+----+
|      date| p1id|   p2id|amount|p3id|
+----------+-----+-------+------+----+
|2013-01-01|U2_P1|p@c.com|   100| P_P|
|2013-01-01|U2_P2|p@c.com|   100|P_P1|
|2014-01-01|U2_P1|p@c.com|   100| P_P|
|2014-01-01|U2_P2|p@c.com|   100|P_P1|
|2015-01-01|U2_P1|p@c.com|   100| P_P|
|2015-01-01|U2_P2|p@c.com|   100|P_P1|
+----------+-----+-------+------+----+
agg_data = process_df.groupby(['date']).agg({'amount':'sum'})
#This prints Schema instead of Data
print(agg_data)
DataFrame[date: string, sum(amount): double]
#This prints data instead of schema
agg_data.show()
+----------+-----------+
|      date|sum(amount)|
+----------+-----------+
|2015-01-01|      200.0|
|2014-01-01|      200.0|
|2013-01-01|      200.0|
+----------+-----------+
from pyspark.sql import functions as F
agg_data.select('date', F.col('sum(amount)').alias('sum')).show()
+----------+-----+
|      date|  sum|
+----------+-----+
|2015-01-01|200.0|
|2014-01-01|200.0|
|2013-01-01|200.0|
+----------+-----+

This is an example that only prints the data. If you need to bring this data into Python, you need to use collect, take, first, or head. Here are a few examples:-
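(The original snippet referenced here was not preserved; the following is a minimal sketch of what these actions give you on the agg_data DataFrame built above, with illustrative variable names.)

# collect() brings every row back to the driver as a Python list of Row objects
rows = agg_data.collect()
# take(2) and head(2) return only the first two rows; first() returns a single Row
two_rows = agg_data.take(2)
first_row = agg_data.first()
# Once collected, the Rows behave like plain Python values and can be wrangled
# with ordinary Python code, e.g. turned into a dict keyed by date
totals = {row['date']: row['sum(amount)'] for row in rows}
print(totals)   # e.g. {'2015-01-01': 200.0, '2014-01-01': 200.0, '2013-01-01': 200.0}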
This is how we can take the data into Python and wrangle it there. Hope this helps.