PySpark Data Types and Conversion
1. Spark data types
ByteType, 1-byte
ShortType, 2-byte
IntegerType, 4-byte
LongType, 8-byte
FloatType, 4-byte
DoubleType, 8-byte
DecimalType, arbitrary-precision decimal numbers
StringType
BinaryType
BooleanType
TimestampType
DateType, year, month, day
ArrayType(elementType, containsNull), array, elements may be null
MapType(keyType, valueType, valueContainsNull)
StructType(fields), StructField(name, dataType, nullable)
2. Viewing data types
df.dtypes
3. Type conversion
from pyspark.sql.types import IntegerType
(1) df = df.withColumn('str_col_int', df['str_col'].cast('int'))
(2) df.select(df['colA'].cast('int')) is equivalent to df.select(df['colA'].cast(IntegerType()))
4. Converting a string to an array
from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, StringType
to_array = udf(lambda x: [x], ArrayType(StringType()))
5. Splitting an array column
Requirement: split the values column so that each element becomes a separate column.
key | values
1   | [0.2, 0.3, 0.4]
2   | [0.2, 0.3, 0.2]

should become

key | value1 | value2 | value3
1   | 0.2    | 0.3    | 0.4
2   | 0.2    | 0.3    | 0.2
Scheme 1
With the approach below, the v.toArray().tolist() step consumes a great deal of memory, which prevented the result from being written out to a table afterwards.
from pyspark.sql.functions import udf, col
from pyspark.sql.types import ArrayType, DoubleType

def to_array(col_):
    def to_array_(v):
        return v.toArray().tolist()
    return udf(to_array_, ArrayType(DoubleType()))(col_)

def get_feat(df, embed_dim):
    df = df.withColumn('embed', to_array(col('values'))) \
           .select(['col_a', 'values', 'label'] + [col('embed')[i] for i in range(embed_dim)])
    for i in range(embed_dim):
        df = df.withColumnRenamed('embed[' + str(i) + ']', 'embed_' + str(i))
    df = df.drop('values')
    return df
Scheme 2
Cast the values column to an array, then save it as a table:
df = df.select(df['values'].cast(ArrayType(DoubleType())))
df.write.saveAsTable('db_name.table_name')
When reading it back, split the column with Hive SQL:
fe = spark.sql('''select passenger_phone,
values[0] as f0,
values[1] as f1,
values[2] as f2,
values[3] as f3,
values[4] as f4,
values[5] as f5,
values[6] as f6,
values[7] as f7,
values[8] as f8,
values[9] as f9
from db_name.table_name''')
6. Reading a CSV
With inferSchema, Spark infers column types instead of defaulting every column to string:
df4 = spark.read.option("inferSchema", True) \
    .option("delimiter", ",") \
    .option("header", True) \
    .csv("src/main/resources/zipcodes.csv")