Common CREATE TABLE statement, with a partition column and the storage format:
spark.sql("""
CREATE TABLE IF NOT EXISTS table_name (
    `key` string,
    `value` string
)
PARTITIONED BY (dt string COMMENT "date partition")
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' STORED AS ORC
""")
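Note that ORC is a binary columnar format, so the `ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'` clause has no effect on how the data is stored; field delimiters only matter for TEXTFILE tables. A sketch of the cleaner equivalent DDL (same table and columns as above):

```sql
CREATE TABLE IF NOT EXISTS table_name (
    `key` string,
    `value` string
)
PARTITIONED BY (dt string COMMENT "date partition")
STORED AS ORC
```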
Convert data to a DataFrame, register it as a temporary view, and insert it into the table:
import pandas as pd

a = 0.000001
b = 0.0002
# In an f-string, double the braces ({{ and }}) to produce literal { and }
value = f'{{"name":"xxxx","params":{{"model_name":"xgb","model_weight":{a},"threshold":{b},"merge":true,"ids":[1000260]}}}}'
conf = [
    ['a', value],
    ['b', value]
]
tmp_pd = pd.DataFrame(conf, columns=['key', 'value'])
# createOrReplaceTempView replaces an existing view, so no DROP VIEW is needed
spark.createDataFrame(tmp_pd).createOrReplaceTempView("tmp_pd")
res = spark.sql("""
select * from tmp_pd
""").toPandas()
print(res.iloc[0, 1])
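Escaping braces inside an f-string is error-prone; a sketch of building the same config string with `json.dumps` instead, which needs no escaping and guarantees valid JSON (field names taken from the snippet above):

```python
import json

a = 0.000001
b = 0.0002
# Build the config as a plain dict, then serialize; no brace escaping needed
value = json.dumps({
    "name": "xxxx",
    "params": {
        "model_name": "xgb",
        "model_weight": a,
        "threshold": b,
        "merge": True,
        "ids": [1000260],
    },
})
print(value)
```

The round trip through `json.loads` also serves as a cheap validity check before the string lands in the table.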
dt='20221014'
spark.sql(f"""
INSERT OVERWRITE TABLE table_name
PARTITION(dt='{dt}')
select * from tmp_pd
""")
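The INSERT above writes to a single static partition. When the partition value is a column of the data itself, dynamic partitioning is an option; a sketch, assuming the source view also carries a `dt` column (it must come last in the SELECT, and a fully dynamic insert requires nonstrict mode):

```sql
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE table_name PARTITION(dt)
SELECT `key`, `value`, dt FROM tmp_pd
```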
Fixing truncated output when displaying toPandas() results:
pd.set_option('display.max_columns', None)    # show all columns
pd.set_option('display.max_colwidth', 500)    # max displayed cell width; default is 50, adjust as needed
spark.sql("""
select * from table_name
order by dt
""").toPandas()
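A minimal pandas-only sketch of the effect of these options (no Spark needed); the sample value is made longer than the default 50-character limit:

```python
import pandas as pd

# Widen pandas display limits so long cell values are not cut off
pd.set_option('display.max_columns', None)    # show every column
pd.set_option('display.max_colwidth', 500)    # default is 50 characters

long_value = '{"name":"xxxx","params":{"model_name":"xgb","merge":true}}' * 2
df = pd.DataFrame({'key': ['a'], 'value': [long_value]})
print(df)  # the full value string is rendered instead of being cut off with "..."
```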
Create a temporary view from a SQL query result and cache it:
data = spark.sql("""
select * from table_name
""").toDF("a", "b")  # rename the columns directly; no need to detour through .rdd
data.cache()  # lazy: materialized on the first action
data.createOrReplaceTempView("data_info_tb")
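cache() is lazy: nothing is materialized until the first action runs against `data`. Spark SQL also exposes caching directly, and its CACHE TABLE is eager by default; a sketch:

```sql
CACHE TABLE data_info_tb;
-- ... run queries against the cached view ...
UNCACHE TABLE data_info_tb;  -- release the memory when done
```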